SANS Internet Storm Center - Cooperative Cyber Security Monitor
ISC Stormcast For Wednesday, June 28th 2017 https://isc.sans.edu/podcastdetail.html?id=5562, (Wed, Jun 28th)
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
This is a follow-up from our previous diary about today's ransomware attacks using the new Petya variant. So far, we've noted:
Samples of the new Petya variant are DLL files. So far, we've confirmed the following two SHA256 file hashes are the new variant:
Examining the new Petya variant
Petya is a ransomware family that works by modifying the infected Windows system's Master Boot Record (MBR). Using rundll32.exe with #1 as the DLL entry point, I was able to infect hosts in my lab with the above two DLL samples. The reboot didn't occur right away. However, when it did, my infected host ran CHKDSK after rebooting.
After CHKDSK finished, the infected Windows host's modified MBR prevented Windows from loading.
Samples of the new Petya variant appear to have WMI command-line (WMIC) functionality. Others have confirmed this variant spreads over Windows SMB and is reportedly using the EternalBlue exploit tool, which exploits CVE-2017-0144 and was originally released by the Shadow Brokers group in April 2017.
Keep in mind this is a new variant of Petya ransomware. I'm still seeing samples of the regular Petya ransomware submitted to places like VirusTotal and other locations. From what we can tell, those previous versions of Petya are not related to today's attack.
New Petya variant ransom message
Ooops, your important files are encrypted.
If you see this text, then your files are no longer accessible, because they have been encrypted. Perhaps you are busy looking for a way to recover your files, but don't waste your time. Nobody can recover your files without our decryption service.
We guarantee that you can recover all your files safely and easily. All you need to do is submit the payment and purchase the decryption key.
Please follow the instructions:
1. Send $300 worth of Bitcoin to the following address:
2. Send your Bitcoin wallet ID and personal installation key to e-mail firstname.lastname@example.org. Your personal installation key:
If you already purchased your key, please enter it below.
More reports about the new Petya variant
Sent from a reader earlier today:
A quick check reveals that, apparently, another global ransomware attack is making the rounds today.
Initial reports indicate this is much like last month's WannaCry attack. According to the Verge article, today's ransomware appears to be a new Petya variant called Petyawrap. At this point, we see plenty of speculation on how the ransomware is spreading (everything from email to an EternalBlue-style SMB exploit), but nothing has been confirmed yet for the initial infection vector.
Alleged samples of this ransomware include the following SHA256 hashes:
AlienVault Open Threat Exchange (OTX) is currently tracking this threat at:
We'll provide more information as it becomes available.
Has anyone read A Tale of Two Cities, the 1859 novel by Charles Dickens? Or maybe seen one of the movie adaptations of it? It's set during the French Revolution, including the Reign of Terror, when revolutionary leaders used violence as an instrument of the government.
In the previous sentence, substitute violence with email. Then substitute government with criminals. Now what do you have? Email being used as an instrument of the criminals!
I know, I know... no real ties to Dickens' novel here.
This diary briefly investigates two phishing emails. It's a Tale of Two Phishies I ran across on Monday 2017-06-26.
First example: an unsophisticated phish
The first example went to my blog's admin email address. It came from the mail server of an educational institution in Paraguay, possibly used as a relay from an IP address in South Africa. With email headers, you can only rely on the Received: header added right before the message hits your mail server. Anything before that can be spoofed.
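To illustrate, here is a short Python sketch of that rule, using a hypothetical raw message: only the topmost Received: header, which our own mail server added, can be trusted. All hostnames below are invented.

```python
from email import message_from_string

# Hypothetical raw message: only the topmost Received header was added by
# our own mail server; everything below it could have been forged.
raw = """Received: from relay.example.edu (relay.example.edu [203.0.113.7])
\tby mx.ourdomain.example (Postfix) for <admin@ourdomain.example>
Received: from forged.hostname.invalid (unknown [198.51.100.9])
\tby relay.example.edu (Postfix)
From: "Support" <support@phish.invalid>
Subject: Account verification required

Click the link to verify your account.
"""

msg = message_from_string(raw)
received = msg.get_all("Received")

# get_all() returns headers top-down, so received[0] is the hop recorded by
# our own server -- the only one we can actually vouch for.
trusted_hop = received[0]
print("Trustworthy relay:", trusted_hop.split()[1])
```

Everything below that first hop is attacker-supplied text, which is why the Paraguay relay above is the last fact we can state with confidence.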
It's a pretty poor attempt, because this phishing message is very generic. I'm educated enough to realize it didn't come from my email provider, and the login page was obviously fake. Unfortunately, some people might actually be fooled by this.
The compromised website hosting the fake login page was quickly taken offline, so you won't be able to replicate the traffic by the time you read this.
Second example: a slightly more complex phish
Every time I see a phishing message like this second example, I hope there's malware involved.
Examining the PDF attachment, I quickly realized the criminals had made a mistake: they forgot to put .com at the end of the domain name in the URL from the PDF file. lillyforklifts should be lillyforklifts.com. I'd checked the URL early Monday morning with .com at the end of the domain name, and it worked.
An elephant in the room
These types of phishes are what I call an elephant in the room. That's an English-language metaphor for an obvious issue that no one discusses or challenges. These phishing emails are very much an elephant in the room for a lot of security professionals. Why? Because we see far more serious issues during day-to-day operations on our networks. Many people (including me) feel we have better things to worry about.
But these types of phishing emails are constantly sent. They represent an ongoing threat, however small it might be in comparison to other issues.
Messages with fake login pages for Netflix, Apple, email accounts, banks, and other organizations appear on a daily basis. For example, the stats page on Phishtank.com indicates an average of 1,000 to 1,500 unique URLs were submitted daily during the past month. Stats for specific months show 58,556 unique URLs submitted in May 2017 alone.
Fortunately, various individuals on Twitter occasionally tweet about the fake login pages they find. Of course, many people also notify sites like PhishTank, scumware.org, and many other resources to fight this never-ending battle.
So today, it's an open discussion on these phishing emails. Do you know anyone that's been fooled by these messages? Are there any good resources covering these phishing emails I forgot to mention? If so, please share your stories or information in the comments section below.
ISC Stormcast For Tuesday, June 27th 2017 https://isc.sans.edu/podcastdetail.html?id=5560, (Tue, Jun 27th)
[This is the first part of a multi-part guest diary written by Dr. Ali Dehghantanha]
One of the nightmares of any forensics investigator is to come across a new or undocumented platform or application during an investigation with tight deadlines. The investigator has only limited research time to find evidence and hopes not to miss any essential remnants. Fortunately, there is a field of research called residual data forensics, in which researchers detect and document remnants (evidence) of forensic value left by user activities on different platforms. Residual forensics researchers usually list the minimum evidence that can be extracted by a forensics practitioner.
In one of my recent engagements, I had to investigate BitTorrent Sync version 2.0 on a range of different devices. Back then I used papers authored by Scanlon, Farina, et al. (References 1-4) on the investigation of BitTorrent Sync version 1.1.82. However, as a redesigned folder sharing workflow was introduced in newer versions of BitTorrent Sync (from version 1.4 onwards), there is a need to develop an up-to-date understanding of the artefacts left by the newer BitTorrent Sync applications.
In a series of diaries, I am going to discuss residual artefacts of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, an iPhone 4 running iOS 7.1.2, and an HTC One X running Android KitKat 4.4.4. (For a more involved read, which includes the experiment setup and full details of our investigation, please refer to our paper titled Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study (Reference 5).) Please feel free to comment about any other evidence you came across in your investigations and/or suggest other investigation approaches.
This diary post explains artefacts of directory listings and files of forensic interest of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, and Ubuntu 14.04.1 LTS.
The downloaded folders were saved at %Users%\[User Profile]\BitTorrent Sync, /home/[User profile]/BitTorrent Sync, and /Users/[User Profile]/BitTorrent Sync on the Windows 8.1, Ubuntu OS, and Mac OS clients by default, respectively. Within the shared folders (both locally added and downloaded) there is a hidden .sync subfolder. The file of particular interest stored within the subfolder is the ID file which holds the folder-specific share ID in hex format. The share ID would be especially useful when seeking to identify peers sharing the same folder during network analysis.
When a synced file is deleted, copies of the deleted file can be recovered from the /.sync/Archive folder of the corresponding peer devices. It is important to note that deleted files are only kept in the archive folder for 30 days by default. Copies of the deleted files alongside the pertinent file deletion information (e.g., the original paths, file sizes, and deletion times) can be recovered from the %$Recycle.Bin%\SID folder on Windows 8.1, but the files are renamed to a set of random characters prefixed with $R and $I. On the Ubuntu machine, copies of deleted files can be recovered from the /home/[User Profile]/.local/share/Trash/files folder. The original file path and deletion time can be recovered from .TRASHINFO files located in /home/[User Profile]/.local/share/Trash/info/. In contrast to Windows and Ubuntu, examination of the Mac OS X trash folder (located at /Users/[User profile]/.Trash) only recovered copies of the deleted files. However, it is noteworthy that these findings only apply to the system that initiated the file deletion, and only as long as the recycle bin or trash folder is not emptied. A practitioner could potentially recover BitTorrent Sync usage information from various metadata files residing in the application folder, located at %AppData%\Roaming\BitTorrent Sync on Windows 8.1 and /Users/[User Profile]/Library/Application Support/BitTorrent Sync on Mac OS X.
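As a practical aside, the $I records mentioned above can be parsed with a few lines of Python. The sketch below assumes the pre-Windows 10 layout (version 1: 8-byte version, 8-byte original file size, 8-byte FILETIME deletion time, then a fixed 520-byte null-padded UTF-16LE path); verify the offsets against your own evidence before relying on it.

```python
import struct
from datetime import datetime, timezone

def parse_dollar_i(blob: bytes):
    """Parse a Windows 8.1-style $I record (version 1 layout assumed)."""
    version, size, filetime = struct.unpack_from("<QQQ", blob, 0)
    # Fixed 520-byte UTF-16LE path field, null padded
    path = blob[24:24 + 520].decode("utf-16-le").split("\x00", 1)[0]
    # FILETIME counts 100-ns ticks since 1601-01-01 UTC
    deleted = datetime.fromtimestamp((filetime - 116444736000000000) / 1e7,
                                     tz=timezone.utc)
    return {"version": version, "size": size, "deleted": deleted, "path": path}

# Synthetic record for demonstration only
path = "C:\\Users\\demo\\Documents\\report.docx"
ft = 131431000000000000  # an arbitrary FILETIME value in 2017
blob = struct.pack("<QQQ", 1, 4096, ft) + path.encode("utf-16-le").ljust(520, b"\x00")
info = parse_dollar_i(blob)
print(info["path"], info["size"], info["deleted"].isoformat())
```

Pairing each parsed $I record with its $R sibling then recovers both the deletion metadata and the file content.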
The application folder maintains a similar directory structure across multiple operating systems, and the /%BitTorrent Sync%/.SyncUser[Random number] subfolder is an identity-specific application folder that is synchronised across multiple devices sharing the same identity. The first file of particular interest within the application folder is settings.dat, which maintains metadata associated with the device under investigation such as the installation path (distinguished by the exe_path entry), installation time in Unix epoch format (install_time), non-encoded peer ID (peer_id), log size (log_size), registered URLs for peer search (search_list, tracker_last, etc.), and other information of relevance. The second file of forensic interest within the application folder is sync.dat, which contains a wealth of information relating to the shared folders downloaded to the device under investigation. In particular, the device name can be discerned from the device entry. The identity entry records the identity name (name) of the device under investigation as well as the private (private_keys) and public keys (public_keys) used to establish connections with other devices. A similar finding was observed for the peer identities in the identities entry. A replication of the identity and identities entries can be located in the local-identity-specific /%BitTorrent Sync%/.SyncUser[Random number]/identity.dat file and the peer-identity-specific /%BitTorrent Sync%/.SyncUser[Random number]/identities/[Certificate fingerprint] file (with the exception of the private key), respectively.
The access-requests entry holds metadata pertaining to the identities which sent folder access requests to the device under investigation, such as the last used IP addresses in network byte order (addr), identity names (name), and public keys (public_keys) of the requesting identities, as well as base32-encoded temporary keys (invite), requested folder IDs, request times (req_time), requested permissions (requested_permissions, where 2 indicates read only, 3 indicates read and write, and 4 indicates owner), and granted permissions (granted_permissions).
Located within the folders entry of the sync.dat file is metadata relating to the synced folders. It should be noted that this entry will never be empty, as it always contains at least an entry for the identity-specific /%BitTorrent Sync%/.SyncUser[Random number] application folder. Amongst the information of forensic interest recoverable from the folders entry are the folder IDs (folder_id), storage paths (path), the addition and last modified dates in Unix epoch format, the peer discovery method(s) used to share the synced folders, the access and root certificate keys, whether the folders have been moved to trash, and other information of relevance. Correlating the folder IDs recovered from the folders entry with the folder IDs located in /%BitTorrent Sync%/.SyncUser[Random number]/devices/[Base32-encoded Peer ID]/folders/ may determine the shared folders associated with a peer device. Analysis of the access control list (acl) subentry (of the folders entry) can be used to identify the permissions of identities associated with each shared folder, such as the identity names (name), public keys (public_keys), signature issuers, the times when the identities were linked to a specific shared folder, as well as other information of relevance. Similar details can be located in the folder-specific /%BitTorrent Sync%/.SyncUser[Random number]/folders/[Folder ID]/info.dat file. The peers subentry (of the folders entry), if available, provides a practitioner with information about the peers associated with the shared folders added by the device under investigation, such as the last completed sync time (last_sync_completed), last used IP address (last_addr) in network byte order, device name (name), last seen time (last_seen), last data sent time (last_data_sent), and other relevant information.
Another file of interest which can potentially allow a practitioner to recover sync metadata is the /%BitTorrent Sync%/[share-ID].db SQLite3 database. This share-ID-specific database describes the content of a shared folder (including the /%BitTorrent Sync%/.SyncUser[Random number] application folder), such as the shared file or folder names (stored in the path field of the files table), hashes, and transfer piece registers for the shared files or folders. Once the shared file or folder names have been identified, a practitioner may map the details to the /%BitTorrent Sync%/history.dat file (which maintains a list of the file syncing events appearing in the History of the BitTorrent Sync client application) to obtain the sync times in Unix epoch format as well as the associated device names.
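A practitioner could query such a database with Python's sqlite3 module. The sketch below builds a stand-in database using only the table and field names given above (files, path); a real [share-ID].db will carry a richer schema, so treat the extra hash column and sample rows as fabricated.

```python
import sqlite3

# Build a stand-in [share-ID].db with the table/field named in the diary.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (path TEXT, hash TEXT)")
con.executemany("INSERT INTO files VALUES (?, ?)",
                [("docs/plan.txt", "ab12"), ("photos/cat.jpg", "cd34")])

# List the shared file names for correlation with history.dat sync events.
shared = [row[0] for row in con.execute("SELECT path FROM files")]
print(shared)
```

On a real case, open the recovered database read-only (e.g., via a `file:...?mode=ro` URI) so the evidence is not modified.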
Figure 1: History.dat file
The /%BitTorrent Sync%/sync.pid file holds the last used process identifier (PID), which can be used to correlate data with physical memory remnants (e.g., mapping a string of relevance to the data residing in the memory space of the PID under investigation using the yarascan function of Volatility). It is important to note that all the metadata files mentioned above are Bencoded (with the exception of the sync.pid file).
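Because these metadata files are Bencoded, a small decoder is enough to inspect them. The following is a minimal Python sketch covering the four Bencode types; the sample byte string is fabricated, though its keys mirror those described above.

```python
def bdecode(data: bytes, i: int = 0):
    """Minimal Bencode decoder (integers, byte strings, lists, dicts).
    Returns (value, next_offset)."""
    c = data[i:i+1]
    if c == b"i":                       # integer: i<digits>e
        j = data.index(b"e", i)
        return int(data[i+1:j]), j + 1
    if c == b"l":                       # list: l<items>e
        i += 1; out = []
        while data[i:i+1] != b"e":
            v, i = bdecode(data, i)
            out.append(v)
        return out, i + 1
    if c == b"d":                       # dict: d<key><value>...e
        i += 1; out = {}
        while data[i:i+1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            out[k] = v
        return out, i + 1
    j = data.index(b":", i)             # byte string: <length>:<bytes>
    n = int(data[i:j])
    return data[j+1:j+1+n], j + 1 + n

# Decode a fabricated settings.dat-style snippet (keys are illustrative only)
sample = b"d7:peer_id4:ab1212:install_timei1498000000ee"
decoded, _ = bdecode(sample)
print(decoded)
```

Feeding a recovered settings.dat or sync.dat through such a decoder exposes the entries (peer_id, install_time, etc.) discussed above as ordinary Python dictionaries.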
Figure 2: com.apple.spotlight.plist
When disconnecting a shared folder, it was observed that no changes were made to the peer devices, even when the option delete files from this device was selected to permanently delete the synced files/folders from the local device. When unlinking an identity from the investigated devices, it was observed that the identity-specific /%BitTorrent Sync%/.SyncUser[Random number] application folder is deleted from the local device. However, only the identity-specific metadata is removed from the identity and identities entries of the local and peer devices' settings.dat files.
Uninstalling the Windows client application removed the .sync subfolders from the synced folders in the directory listing. Manual uninstallation of the Linux and Mac client applications left no trace of the client application usage/installation in the directory listing, but (obviously) deleted files/folders were recoverable from the non-emptied /Users/[User profile]/.Trash folder of the Mac OS X VM investigated.
Data carving of unallocated space (of the file synchronisation VMs) could recover copies of synced files as well as the log and metadata files of forensic interest (e.g., sync.log, sync.dat, history.dat, and settings.dat used by the client applications). A search for the term bittorrent, for bencode keys specific to the metadata files of relevance, and for the pertinent log entries was able to locate copies of the recovered files. The remnants remained even after uninstallation of the client applications, which suggests that unallocated space is an important source for recovering deleted BitTorrent Sync metadata or synced files.
Our next post will describe the investigation of BitTorrent Sync log files.
1) Scanlon, M., Farina, J. and Kechadi, M. T. (2014a) BitTorrent Sync: Network Investigation Methodology, In IEEE, pp. 21-29, [online] Available from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6980260 (Accessed 11 March 2015).
2)Scanlon, M., Farina, J., Khac, N. A. L. and Kechadi, T. (2014b) Leveraging Decentralization to Extend the Digital Evidence Acquisition Window: Case Study on BitTorrent Sync, arXiv:1409.8486 [cs], [online] Available from: http://arxiv.org/abs/1409.8486 (Accessed 18 March 2015).
3) Scanlon, M., Farina, J. and Kechadi, M.-T. (2015) Network investigation methodology for BitTorrent Sync: A Peer-to-Peer based file synchronisation service, Computers & Security, [online] Available from: http://www.sciencedirect.com/science/article/pii/S016740481500067X (Accessed 9 July 2015).
4) Farina, J., Scanlon, M. and Kechadi, M. T. (2014) BitTorrent Sync: First Impressions and Digital Forensic Implications, Digital Investigation, Proceedings of the First Annual DFRWS Europe, 11, Supplement 1, pp. S77-S86.
5) Teing, Y. Y., Dehghantanha, A. and Choo, K.-K. R. (2016) Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study, (Elsevier) International Journal of Computers & Electrical Engineering.
Find out more about Dr. Ali Dehghantanha at http://www.alid.info
ISC Stormcast For Monday, June 26th 2017 https://isc.sans.edu/podcastdetail.html?id=5558, (Sun, Jun 25th)
Traveling with a Laptop / Surviving a Laptop Ban: How to Let Go of "Precious", (Mon, May 29th)
For a few months now, passengers on flights from certain countries have no longer been allowed to carry laptops and other larger electronic devices into the cabin. Many news media reported over the last weeks that this policy may be expanded to flights from Europe, or to all flights entering the US. But even if you get to keep your laptop with you during your flight, it is difficult to keep it at your side when you travel. So regardless of whether this ban materializes or not (right now it looks like it will not happen), this is your regular reminder on how to keep your electronics secure while traveling.
Checking a laptop is considered inadvisable for a number of reasons:
- Your laptop is out of your control and could be manipulated. It is pretty much impossible to secure a laptop if an adversary has control of it for a substantial amount of time. These attacks are sometimes called evil maid attacks, in reference to having the laptop manipulated while it is stored in a hotel room.
- Laptops often are stolen from checked luggage. Countless cases have been reported of airport workers, and in some cases, TSA employees, stealing valuables like laptops from checked luggage.
- Laptops contain lithium batteries, which are usually not allowed in checked luggage as there have been instances of them exploding (and this fact may very likely block the laptop ban).
You are typically not allowed to lock your checked luggage. And even if you lock it, most luggage locks are easily defeated. The main purpose of a lock should be to identify tampering, not to prevent tampering or theft.
Here are a couple of things that you should consider when traveling with your laptop, regardless of where you keep it during your flight:
- Full disk encryption with pre-boot authentication. This is a must for any portable device, no matter where you are flying. You will never be able to fully control your device. Larger devices like laptops are often left unattended in a hotel room, and hotel safes provide minimal security.
- Power your device down. Do not just put it to sleep. For checked luggage, this may even prevent other accidents like overheating if the laptop happens to wake up. Powering the laptop down will also make sure encryption keys cannot be recovered from memory.
- Some researchers suggest covering the screws on your laptop in glitter nail polish. Take a picture before departure and use it to detect tampering.
- Take a blank machine, and restore it after arrival from a network backup. This may not be practical, in particular for international travel. But you could do the same with a disk backup, and so far, USB disks are still allowed as carry-on and they are easier to keep with you. Encrypt the backups.
- Take a blank machine and use a remote desktop over the network. Again, this may not work in all locations due to slow network speeds and high costs. But this is probably the most secure solution.
- If you are lucky enough to own a laptop with a removable hard drive, then remove it before checking your luggage.
- Before departure, set up a VPN endpoint that allows connections on various ports and via HTTP proxies (e.g., OpenVPN has a mode allowing this). You never know what restrictions you will run into. Test the VPN before you leave!
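For example, a minimal OpenVPN setup along these lines might listen on TCP 443 so it can be reached from restrictive networks and through HTTP proxies. The directives below are standard OpenVPN options, but all hostnames, addresses, and ports are placeholders; adapt them to your own server.

```
# Server side: listen on TCP 443, which most restrictive networks allow.
# port-share lets real HTTPS traffic pass through to a local web server,
# so the endpoint still looks like an ordinary website.
port 443
proto tcp-server
port-share 127.0.0.1 8443

# Client side (separate config): reach the server via an HTTP proxy if needed.
# remote vpn.example.net 443 tcp
# http-proxy proxy.example.net 3128
```

UDP on the default port 1194 is faster when it works, so a sensible client config lists the UDP remote first and the TCP-443 remote as a fallback.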
Have a plan for what happens if your laptop is lost or stolen. How will you be able to function? Even if you do not have a complete backup of your laptop with you, a USB stick with important documents that you will need during your trip is helpful, as well as a cloud-based backup. You may want to add VPN configuration details and certificates to the USB stick so you can connect if needed. Be ready to use a loaner system with an unknown history and configuration for a while, to give a presentation or even for webmail access. This is a very dangerous solution, and you should reset any passwords that you used on the loaner system as soon as possible. But sometimes you have to keep going under less than ideal circumstances. Of course, right now, you can still bring your phone onboard, which should be sufficient for e-mail in most cases.
In general, this advice should be followed whenever you travel. It is very hard not to leave your laptop unsupervised at some point during a long trip. If you don't trust hotel safes (and you should not trust them), then it may make sense to bring your own lockable container, like a Pelican case with solid locks (Pelican also makes a backpack that works reasonably well but is a bit bulky and heavy). Don't forget a cable to attach the case to something. Just don't skimp on the locks, and again: the goal is to detect tampering/theft, not to prevent it. Any case that you can carry on an airplane can be defeated quickly with a hacksaw or a crowbar, and usually it takes much less.
Also, see this Ouch! Newsletter about staying secure while on the road:
We continue to receive reports about DDoS extortion e-mails. These e-mails are essentially spammed to the owners of domains based on whois records. They claim to originate from well-known hacker groups like Anonymous, who have been known to launch DDoS attacks in the past. These e-mails essentially use the notoriety of the group's name to make the threat sound more plausible. But there is no evidence that these threats originate from these groups, and so far we have not seen a single case of a DDoS being launched after a victim received one of these e-mails. So there is no reason to pay :)
Here is an example of such an e-mail (I anonymized some of the details, like the bitcoin address and the domain name):
We are Anonymous hackers group.
This particular e-mail was rather cheap. Other e-mails asked for up to 10 BTC.
There is absolutely no reason to pay any of these ransoms. But if you receive an e-mail like this, there are a couple of things you can do:
And please forward any e-mails like this to us. It would be nice to get a few more samples to look for patterns. Like I said above, this isn't new, but people appear to still pay up to these fake threats.
ISC Stormcast For Friday, June 23rd 2017 https://isc.sans.edu/podcastdetail.html?id=5556, (Fri, Jun 23rd)
ISC Stormcast For Thursday, June 22nd 2017 https://isc.sans.edu/podcastdetail.html?id=5554, (Thu, Jun 22nd)
Malicious files are generated and spread over the wild Internet daily (read: hourly). The goal of the attackers is to use files that are:
That's why many obfuscation techniques exist: to fool automated tools and security analysts. In most cases, it's just a question of time to decode the obfuscated data. A classic technique is to use the XOR cypher. This is definitely not a new technique (see a previous diary from 2012), but it is still heavily used, and many tools can automate the search for XOR'd strings. Viper, the binary analysis and management framework, is a good example. It can scan for XOR'd strings:

    viper tmpnYaBJs > xor -a
    [*] Searching for the following strings:
    - This Program
    - GetSystemDirectory
    - CreateFile
    - IsBadReadPtr
    - IsBadWritePtr
    - GetProcAddress
    - LoadLibrary
    - WinExec
    - CreateFile
    - ShellExecute
    - CloseHandle
    - UrlDownloadToFile
    - GetTempPath
    - ReadFile
    - WriteFile
    - SetFilePointer
    - GetProcAddr
    - VirtualAlloc
    - http
    [*] Hold on, this might take a while...
    [*] Searching XOR
    [!] Matched: http with key: 0x74
    [*] Searching ROT

In this case, the sample contained a piece of obfuscated JavaScript:

    var bcacfdfaebbbfDeck = new ActiveXObject(dbdbfaeefccaee(+L+^%^LK%,LpL(KeL^%z%+%u%u
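The single-byte XOR search is easy to reproduce. Here is a minimal Python sketch that tries all 256 keys against a short list of plaintext markers; the marker list is a subset of the one above, and the sample buffer is fabricated using the 0x74 key from the Viper output.

```python
MARKERS = [b"http", b"This Program", b"CreateFile"]

def xor_scan(blob: bytes):
    """Try every single-byte XOR key and report (key, marker) hits,
    mimicking what a 'xor -a' style scan does."""
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in blob)
        for m in MARKERS:
            if m in decoded:
                hits.append((hex(key), m.decode()))
    return hits

# Fabricated sample, obfuscated with key 0x74 for the demo
sample = bytes(b ^ 0x74 for b in b"GET http://evil.example/payload")
print(xor_scan(sample))
```

The scan is O(256 x len(blob)), so it stays fast even on large samples; multi-byte rolling keys need a slightly bigger search but follow the same idea.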
I took some time to check how the obfuscation was performed. How does it work?
The position of each character is looked up in the $data variable and decreased by one. Then the character at that position is returned to build a string of hex codes. Finally, the hex codes are converted into the final string. Here is an example with the first two characters of the sample above:
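Based on that description, the decoder can be sketched in a few lines of Python. The $data alphabet below is invented for the round-trip demo; the real script uses its own, much longer alphabet.

```python
def deobfuscate(encoded: str, data: str) -> str:
    """For each character: find its position in the alphabet string `data`,
    step back one position, and collect the resulting hex digits."""
    hexcodes = "".join(data[data.index(c) - 1] for c in encoded)
    return bytes.fromhex(hexcodes).decode("latin-1")

# Round-trip demo with a made-up alphabet: encoding shifts each hex digit
# one position forward in `data`, decoding shifts it back.
data = "0123456789abcdef."          # trailing char so 'f' has a successor
hexed = "http".encode().hex()        # '68747470'
encoded = "".join(data[data.index(c) + 1] for c in hexed)
print(deobfuscate(encoded, data))
```

This kind of trivial substitution survives only because it defeats naive string scanners; once the alphabet is extracted from the script, decoding is instant.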
Last month's entertainment for many of us was of course the WannaCry MS17-010 update. For some of you it was a relaxing time, just like any other month. Unfortunately, for the rest of us it was a rather busy period trying to patch systems that in some cases had not been patched in months or even years. Others discovered that, whilst security teams had been saying you want to open what port to the internet?, firewall rules were approved allowing port 445 and in some cases even 139. Another group of users discovered that the firewall that used to be enabled on their laptop was no longer enabled whilst connected to the internet. Anyway, that was last month. On the back of it we all made improvements to our vulnerability management processes. You did, right?
Ok, maybe not yet; people are still hurting. However, when an event like this happens it is a good opportunity to revisit the process that failed, identify why it went wrong for you, and make improvements. Not the sexy part of security, but we can't all be threat hunting 24/7.
If you haven't started yet, or the new process isn't quite where it needs to be, where do you start?
Once you have the four core ingredients, you are in a position to know what vulnerabilities are present and, hopefully, patchable. You know the systems that are most affected by them and that pose the highest level of risk to the organisation.
The actual mechanics of patching are individual to each organisation. Most of us, however, will be using something like WSUS, SCCM or third-party patching products and/or their Linux equivalents like Satellite, Puppet, Chef, etc. In the tool used, define the various categories of systems you have, reflecting their criticality. Ideally have a test group for each; Dev or UAT environments, if you have them, can be great for this. I also often create a "The Rest" group. This category contains servers that have a low criticality and can be rebooted without much notice. For desktops, I often create a test group, a pilot group and a group for all remaining desktops. The pilot group has representatives of most, if not all, types of desktops/notebooks used in the organisation.
When patches are released they are evaluated, and if they are to be pushed they are released to the test groups as soon as possible. Basic functionality and security testing is completed to make sure that patches are not causing issues. Depending on the organisation, we often push DEV environments first, then UAT after a cycle of testing. Within a few hours of a release you should have some level of confidence that the patches are not going to cause issues. Your timezone may even help you here. In AU, for example, patches are often released during the middle of our night, which means other countries may already have encountered issues and reported them (keep an eye on the ISC site) before we start patching.
The biggest challenge in the process is getting a maintenance window to reboot. The best defence against having your window denied is to schedule them in advance and get the various business areas to agree to them. Patch releases are pretty regular so they can be scheduled ahead of time. I like working one or even two years in advance.
The second challenge is the testing of systems post patching. This will take the most prep work. Some organisations will need to get people to test systems. Some may be able to automate tests. If you need people, organise test teams and schedule their availability ahead of time to help streamline your process. Anything that can be done to get confidence in the patched system faster will help meet the 48 hour deadline.
If going fast is too daunting, make the improvements in baby steps. If you generally patch every three months, implement your own ideas, or some of the above, and see if you can reduce it to two months. Once that is achieved, try to reduce it further.
If you have your own thoughts on how people can improve their processes, or you have failed (we can all learn from failures), then please share. The next time there is something similar to WannaCry, we all want to be able to say sorted that ages ago.
Mark H - Shearwater
ISC Stormcast For Wednesday, June 21st 2017 https://isc.sans.edu/podcastdetail.html?id=5552, (Wed, Jun 21st)
Recently, I was confronted with a scenario where a very suspicious Windows pop-up message was shown to a specific user on a corporate network. It was a kind of Yes/No default Windows dialog box, and although I cannot reveal the message content, I can assure you that it was in the context of what the user was doing on his computer at that moment.
As we were dealing with a major incident on the same network, our first assumption was that someone had compromised that machine and was controlling it remotely through a reverse connection - the type of situation that calls for a rapid response.
However, after a few hours hunting for any piece of malware on that machine, including operating system events, network connections, user Internet history, e-mail attachments, external devices and so on, nothing interesting was found. In fact, the evidence came from a source I'd never imagined could help me in an incident response: Windows Error Reporting (WER), as described in this diary.
As no malware evidence was found, we decided to go back to the drawing board, and after looking carefully at the strange message, I noticed that whatever application the attacker had used to present it was hanging - the classic (Not Responding) indicator was shown in the title bar.
Figure 1 Not Responding application sample
By default, when an application hangs or crashes on a Windows system, the Windows Error Reporting (WER) mechanism automatically gathers detailed debug information, including the application name, its loaded modules and, more importantly, a heap dump, which contains the data that was loaded in the application at the time the memory was collected. All this data is reported to Microsoft which, in turn, may provide users with solutions for known problems.
As the application used to send the strange message had hung, chances were that we could find generated WER artifacts to analyze and track the supposed intrusion. Thus, our next step was looking for them.
To demonstrate how we found and analyzed WER files related to that hung application without exposing real incident information, we've created a similar scenario and used it for this analysis.
Using a default Windows 10 installation in our lab, the first thing was forcing an application to crash. For this purpose, we used the text editor Notepad++ as the application to be crashed and the Process Explorer tool as the means to cause it.
For further analysis purposes, we typed a simple text in the editor, as seen in Figure 2, and, through Process Explorer, started killing random application threads (such as those running in ntdll.dll).
Figure 2 Sample text typed in the editor
Figure 3 Killing application threads
It didn't take long for the application to stop responding.
Figure 4 Hung application
Figure 5 Application event log evidence
Note that the event ID for a crashed application has the value 1000, while for hanging applications the value is 1002.
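During triage, that pair of IDs gives you a quick filter: pull every crash (1000) and hang (1002) event near the time of the suspicious pop-up. The sketch below works on simulated, already-exported records represented as dicts; the field names are illustrative, not the exact schema of any particular log-export tool.

```python
# Triage sketch: filter exported Application-log records (simulated here
# as dicts) for crash (1000) and hang (1002) events inside a time window.
# The "event_id"/"timestamp" field names are assumptions for illustration.
CRASH_ID, HANG_ID = 1000, 1002

def wer_events(records, start, end):
    """Return crash/hang events whose timestamp falls within [start, end]."""
    return [r for r in records
            if r["event_id"] in (CRASH_ID, HANG_ID)
            and start <= r["timestamp"] <= end]
```

On a live Windows host the same filter would be applied to the Application event log via your collection tooling of choice.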
The other evidence is the WER files themselves which, depending on the Windows version, are generated in different paths and can be found through different Control Panel menu options. On Windows 7, for example, WER settings and report access can be found through the Action Center, and on Windows 8 through Problem Reports and Solutions.
On Windows 10, used in our demonstration scenario, the WER menu can be opened through Control Panel - System and Security - Security and Maintenance.
Figure 6 Looking for the specific problem report
Figure 7 WER problem details
Another way to find WER files is going directly to the path where they are created on disk. On Windows 10, WER report files can be reached through the path %SystemDrive%\ProgramData\Microsoft\Windows\WER.
Figure 8 WER report path
Figure 9 WER file list
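Each report directory contains a Report.wer file made of "Key=Value" lines (the on-disk files are UTF-16 encoded). A tiny parser makes them easy to sift in bulk; the field names shown below (EventType, AppPath) are examples of keys you may encounter, not a guaranteed schema.

```python
# Sketch: parse a Report.wer file's "Key=Value" lines into a dict so the
# interesting fields (event type, application path, etc.) can be pulled
# out quickly. Assumes the text has already been decoded (real files are
# UTF-16); key names vary between report types.
def parse_wer(text):
    """Parse Report.wer-style 'Key=Value' lines into a dict."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields
```

Running this across every Report.wer under the WER directory gives you a fast inventory of what crashed or hung, and when.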
Now, drawing a parallel to the real incident, when we searched for event log evidence we found that an application had hung on that machine moments before the message screenshot was taken. Better than that, we also found the WER files associated with that application hang!
You may be wondering how I could find WER files on the machine, given that they are deleted from disk after being sent to Microsoft. The point is: they weren't sent.
The WER report wasn't uploaded to Microsoft: the SSL connection used to send it failed during the man-in-the-middle (MITM) interception in place on that network, so the report files remained on disk.
Figure 10 Problem uploading WER during the MITM attack
Heading back to the real scenario, with the WER files in our hands we could discover the name of the application that likely generated that suspicious pop-up message and, by inspecting the heap dump file, we could confirm it. It turns out that we found the exact pop-up message content in the memory dump file using a simple strings command, although a more orthodox way to inspect and debug those files is to use WinDbg.
Employing the same strings approach in our lab scenario, we could recover the text typed into Notepad++ from the heap dump.
Figure 11 Evidence found
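For readers without a Unix strings binary at hand, the same extraction is a few lines of Python: scan the heap dump bytes for runs of printable ASCII. This is a minimal sketch equivalent to `strings -n 4`; it ignores UTF-16 strings, which Windows dumps often contain, so treat it as a first pass only.

```python
# Minimal Python equivalent of the Unix strings command: pull printable
# ASCII runs (default minimum length 4) out of a binary blob such as a
# WER heap dump. UTF-16 strings are not handled by this sketch.
import re

def strings(data, min_len=4):
    """Return printable ASCII runs of at least min_len bytes."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Usage is simply `strings(open("memory.hdmp", "rb").read())`, then grep the result for the text you are hunting.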
As we could see, in addition to helping Windows users deal with application crashes and hangs, this case demonstrated that WER can be extremely useful for post-mortem analysis. Depending on the scenario, it's like having an application memory dump to analyze as part of your DFIR activities without having collected it during the incident.
On the other hand, it raises some concerns regarding data leakage through the memory dump files. Considering that you have consented to send that information to Microsoft (whether or not you remember having done so), there exists the possibility of that content being accessed by third parties, such as an intruder who has escalated privileges on the targeted machine, or simply by the new employee now using your machine after you assumed that removing your user home directory would be enough.
Things may get worse if we consider that the crashed or hung application is, for example, a password manager. We ran experiments on a group of them and privately reported those that allowed us to recover clear-text passwords from WER memory dumps. The Enpass password manager has already published a security bulletin and a new version fixing the vulnerability, for which CVE-2017-9733 has been assigned.
To prevent sensitive information from leaking through crash dumps, we recommend that Windows application developers either completely disable WER triggering, using the AddERExcludedApplication or WerAddExcludedApplication functions, or exclude the memory region that may contain sensitive information, using the WerRegisterExcludedMemoryBlock function (available only on Windows 10 and later).
A more comprehensive solution could be provided by Windows itself, which could protect report files by encrypting them - at least the memory dumps. Interestingly, there is a patent from IBM on exactly this: protecting application core dump files. Today, encryption is employed only while sending WER report files to Microsoft through SSL connections.
Regarding our case, in the end we fortunately realized that there was no violation or intrusion on that machine. It was, indeed, a misuse of a legitimate tool by an internal employee - one that taught us a bit more about the importance of WER files to digital forensics and user privacy.
ISC Stormcast For Tuesday, June 20th 2017 https://isc.sans.edu/podcastdetail.html?id=5550, (Tue, Jun 20th)
One of our readers (thanks Gebhard) mailed us a link to an article on what the press is apparently now calling a "Revenge Wipe": a system administrator leaves the organization and, as a last hurrah, deletes or locks out various system or infrastructure components.
In this case, the organization was a hosting company in the Netherlands (Verelox). At a cloud provider, a disgruntled admin may have access to delete entire networks, hosts, and associated infrastructure. At a smaller CSP, the administrator may have access to delete customer servers and infrastructure as well. In Verelox's situation, that seems to have been the case (from their press release at least).
The classic example of this is the City of San Francisco in 2008, where their main administrator (Terry Childs) refused to give up the credentials to their FiberWAN network infrastructure, even after being detained by law enforcement (he eventually did give the credentials directly to the Mayor). I've listed several other examples in the references below - note that this was not a new thing even in 2008; it has been a serious consideration for as long as we've had computers.
So, how should an organization protect themselves from a situation like this?
Back up Job Responsibilities:
Know who has access to what. Have multiple people with access to each system. Having any system with only a single administrator can turn into a real problem in the future. DOCUMENT things. BACKUP your configurations in addition to your data.
It can be difficult, but wherever possible use admin accounts with only the rights required. It's very easy to build an "every admin has all rights" infrastructure. It's likely more difficult to build a "why does the VMware admin need the rights to delete an entire LUN on the SAN" config, but it's important to think along those lines wherever you can.
Use a back-end directory for authentication to network infrastructure:
What this often means is that folks implement NPS (RADIUS) services in Active Directory. This allows you to audit access and changes during regular production, and also allows you to deactivate network administrator accounts in one place.
Where you can, use Two Factor Authentication
Use 2FA wherever possible; this makes password attacks much less of a threat. 2FA is an easy win for VPN and other remote access, and also for administration of almost all cloud services your organization uses.
Just as a side note - I am still seeing that many smaller CSPs have not moved forward with 2FA. If you are looking at any new cloud services, making Two Factor Authentication a must-have is a good way to go.
Deal with Stale Accounts:
Keep track of accounts that are not in use. I posted a powershell script for this (targeting AD) in a previous story == https://isc.sans.edu/diary/The+Powershell+Diaries+-+Finding+Problem+User+Accounts+in+AD/19833
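The linked diary does this in PowerShell against AD; as a language-neutral illustration of the same idea, the sketch below flags accounts whose last logon is older than a cutoff, operating on simulated data rather than a live directory.

```python
# Staleness-check sketch on simulated data (the linked diary queries AD
# with PowerShell; this just illustrates the logic). An account is stale
# if its last logon is older than max_age_days.
from datetime import datetime, timedelta

def stale_accounts(accounts, now, max_age_days=90):
    """Return names of accounts with no logon within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, last_logon in accounts.items()
            if last_logon < cutoff]
```

The 90-day default is an arbitrary example; pick a threshold that matches your own account-lifecycle policy.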
Deal with Service Accounts:
Service accounts are used in Windows and other operating systems to run things like Windows Services, or to allow scripts to log in to various systems as they run. The common situation is that these service accounts have Domain Administrator or local root access (depending on the OS).
Know in your heart that the person you are protecting the organization from is the same person who likely created one or all of these accounts.
Be sure that these service accounts are documented as they are created, so that if a mass change is required it can be done quickly.
Ensure that these accounts use a central directory (such as AD or LDAP), so that if you need to change or disable them, there is one place to go.
I posted a PowerShell script in a previous story to inventory service accounts in AD == https://isc.sans.edu/forums/diary/Windows+Service+Accounts+Why+Theyre+Evil+and+Why+Pentesters+Love+them/20029/
Restrict Remote Access:
Be sure that your administrative accounts don't have remote access (VPN, RDP Gateway, Citrix CAG, etc.). This falls into the same category as "don't allow administrators to check mail or browse the internet while logged in with Domain Admin or root privileges".
On the day:
On the day of termination, be sure that all user accounts available to the departing administrator are deactivated during the HR interview. If you've used a central authentication store, this should be easy (or at least easier).
Also force a global password change for all users (your departing admin has probably done password resets for many of them), and if you have any stale accounts, simply deactivate those.
For service accounts, update the passwords for all of these. This is a good time to make sure you aren't following a pattern for these passwords - use long random strings (L33t-speak versions of your company or product name are not good choices here).
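"Long random strings" is easy to do properly with a cryptographically secure generator. A minimal sketch:

```python
# Service-account password sketch: long random strings drawn from a
# CSPRNG (the secrets module), with no patterns or guessable themes.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def service_password(length=32):
    """Return a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Length matters more than cleverness here; 32 characters from this alphabet is far beyond practical brute force, and since a service account password is pasted once into a service definition, memorability is irrelevant.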
I'm sure that I've missed some important things - please use our comment form to fill out the picture. This is a difficult topic; since many of us are admins for one thing or another, it really hits close to home. But for the same reason, it's important that we deal with it correctly, or as correctly as the situation allows.
Sysinternals Sysmon 6.03 is out. Bug fixes only, no new features https://blogs.technet.microsoft.com/sysinternals/2017/06/17/sysinternals-update-sysmon-v6-03/, (Mon, Jun 19th)
=============== Rob VandenBrink Metafore(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
ISC Stormcast For Monday, June 19th 2017 https://isc.sans.edu/podcastdetail.html?id=5548, (Mon, Jun 19th)
When it comes to log collection, it is always difficult to figure out what to capture. The primary reasons are cost and value. Of course you can capture every log flowing in your network, but if you don't have a use case attached to its value, that equals wasted storage and money - really not ideal, since most Security Information Management (SIM) systems, also referred to as Security Information and Event Management (SIEM), have a daily cost associated with log capture. Before purchasing a SIM, the first task, which is often difficult, is deciding what to collect and why. We want quality over quantity. Again, what you collect has a cost: the minimum time logs must be retained (how many years) has to be calculated, because it relates directly to the number of events per second (EPS) collected daily, how many log collectors are necessary to capture what you need, etc.
Next, it is important to identify your top five use cases, based on the value they can deliver immediately to the security team. This part is often difficult to pinpoint because it usually isn't well defined up front. The next steps are to identify the log source (firewall, IPS, VPN, etc.), its category (user activity, email, proxy, etc.), its priority (high, medium, low), the information type (IP, hostname, username, etc.) and the matching use case (authentication, suspicious outbound activity, web application attack, etc.). The last step is to identify the SIM that will meet your goals.
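The EPS-to-storage relationship above is worth making concrete before talking to a SIM vendor. The back-of-envelope calculation below converts sustained EPS into retained volume; the 500-byte average event size is an assumption for illustration, so measure your own sources before sizing anything.

```python
# Back-of-envelope SIM sizing: convert sustained events per second (EPS)
# into retained log volume. The 500-byte average event size is an
# assumption; measure your actual sources before committing to a number.
def retained_gb(eps, avg_event_bytes=500, retention_days=365):
    """Approximate storage (GB) needed to retain logs at a given EPS."""
    daily_bytes = eps * avg_event_bytes * 86_400  # seconds per day
    return daily_bytes * retention_days / 1e9

# e.g. retained_gb(1000) -> 15768.0 GB (~16 TB) for one year at 1000 EPS
```

Numbers like these make the "quality over quantity" argument tangible: doubling either EPS or retention doubles the bill, so every log source should earn its place with a use case.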