Black Hat Federal 2006

Date: Sat, 28 Jan 2006 01:05:50 +0100
Black Hat Federal 2006 Wrap-Up, Part 5


Please see part 1 for an introduction if you are reading this article separately.

Next I heard Stefano Zanero discuss problems with testing intrusion detection systems. He said that researchers prefer objective means with absolute results, while users prefer subjective means with relative results. This drives the "false positive" debate. Researchers see false positives as failures of the IDS engine to work properly, while users see any problem as the fault of the whole system.

Stefano mentioned work done by Giovanni Vigna and others on the Python-based Sploit, which creates exploit templates and mutant operators to test IDSs. He also cited an ICSA Labs project that doesn't appear to have made much progress developing IDS testing methodologies. Stefano said that good IDS tests must include background traffic; running an exploit on a quiet network is a waste of time. Stefano is developing a test bed for network traffic generation in the context of testing IDSs. He lamented that there is no "zoo list" of attacks currently seen in the wild, as is the case with viruses and related malware.
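Stefano's point about exploit templates and mutant operators is easy to picture: one attack, many equivalent encodings, and the IDS should flag them all. Here is a minimal sketch in Python (Sploit's own language), with made-up operator names rather than Sploit's actual API:

```python
# Toy "mutant operators" in the spirit of Sploit: derive equivalent
# variants of one attack URL path, each of which an IDS should still
# flag. Operator names here are illustrative, not Sploit's real API.

def percent_encode(path):
    # Encode every character as %XX; the server decodes it back.
    return "".join("%%%02X" % ord(c) for c in path)

def self_reference(path):
    # Insert "./" after each slash; the path resolves identically.
    return path.replace("/", "/./")

def mutants(path):
    """Yield the original attack path plus its encoded variants."""
    for op in (lambda p: p, percent_encode, self_reference):
        yield op(path)
```

A test harness would replay each variant inside realistic background traffic, per Stefano's warning that exploits on a quiet network prove little.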

Stefano claimed that vendors who say they perform "zero day detection" are really "detecting new attacks against old vulnerabilities." I don't necessarily agree with this, and I asked him about "vulnerability filter"-type rules. He said those are a hybrid of misuse and anomaly detection. Stefano declared that vendors who claim to perform "anomaly detection" are usually doing protocol anomaly detection, meaning they identify odd characteristics in protocols and not odd traffic in aggregate. He reminded the audience of Bob Colwell's paper If You Didn't Test It, It Doesn't Work. He also said that vendor claims must be backed up by repeatable testing methodologies. Stefano made the point that a scientist could never publish a paper that offered unsubstantiated claims.

I spoke with Stefano briefly and was happy to hear he uses my first book as a text for his undergraduate security students.

After Stefano's talk I listened to Halvar Flake explain attacks on uninitialized local variables. I admit I lost him about half way through his talk. Here is what I can relate. An uninitialized local variable is a variable for which memory has been allocated, but which is read before the program writes anything to it. The idea is to find those variables and plant data of the attacker's choice in the memory they will occupy, to be acted upon later when the program reads them. The problem revolves around the fact that the stack is not cleaned when items are popped off, for performance reasons. Ok, that's about it.
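To make the mechanism concrete, here is a toy model of my own (not Halvar's code, and a simulation only; real exploitation targets native code): the "stack" is a reused buffer, popping a frame clears nothing, so a later function that reads a local before writing it picks up whatever the previous call left behind.

```python
# Toy model of the uninitialized-local problem: one buffer stands in
# for the stack, and frames come and go without any cleanup.

class ToyStack:
    def __init__(self, size=64):
        self.mem = bytearray(size)

    def call_attacker(self, payload):
        # Frame 1 fills its locals with attacker-controlled bytes,
        # then "returns" -- nothing is zeroed on the way out.
        self.mem[0:len(payload)] = payload

    def call_victim(self):
        # Frame 2 occupies the same region, but reads a local before
        # initializing it, so it sees the attacker's stale bytes.
        local = bytes(self.mem[0:8])  # uninitialized read
        return local
```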

Here are a few general thoughts on the talk. Twice Halvar noted that finding 0-days is not as easy as it was before. Bugs are getting harder to exploit. In order to test new exploitation methods, Halvar can't simply find 20 0-days in an application and run his exploits against those vulnerabilities. He also can't write simple yet flawed code snippets and test against those, since they do not adequately reflect the complexity of modern applications. What he ends up doing is "patching in" flawed code into existing applications. That way he ends up with a complex program with known problems, against which he can try novel exploitation methods.

Halvar made heavy use of graphical depictions of code paths, as shown by his company's product BinNavi. This reminded me of the ShmooCon reverse engineering BoF, where many of the younger guns expressed their interest in graphical tools. As the problem of understanding complex applications only grows, I see these graphical tools as being indispensable for seeing the bigger picture.

For points of future research, Halvar wondered if there are uninitialized heap variables that could be exploited. He said that complexity cuts two ways in exploit development. Complex applications sometimes give intruders more freedom of maneuver. They may also make exploitation more difficult, because the order in which an application allocates memory often matters. Halvar mentioned that felinemenace.org/~mercy addressed the same subject as his talk.

Robert Graham, chief scientist of ISS, gave the last talk I saw. He discussed the security of Supervisory Control And Data Acquisition (SCADA) systems. SCADA systems control power, water, communication, and other utilities. Robert does domestic and foreign penetration testing of SCADA systems, and he included a dozen case studies in his talk.

For example, he mentioned that the Blaster worm shut down domestic oil production for several days after a worker connected an infected laptop to a diagnostic network! Who needs a hurricane? Robert also told how a desktop negotiation for a pen test resulted in a leap from the corporate conference room, via wireless to a lab network, via Ethernet to the office network, through a dual-homed Solaris box to the SCADA network. At that point the prospective client said "Please stop." (ISS received approval from the client to make each step, I should note.) In another case, Robert's team found a lone Windows PC sitting in a remote unlocked shed that had complete connectivity via CDPD modem to a SCADA network.

The broad outline of his conclusions includes:

  • Patches are basically forbidden on SCADA equipment.
  • SCADA systems require little or no authentication.
  • Little or no SCADA traffic is encrypted.
  • Despite SCADA industry claims, SCADA systems are publicly accessible from the Internet, wireless networks, and dial-up. The "air gap" is a myth.
  • There is little to no logging of activity on the SCADA networks.

In sum, Robert said that SCADA "executives are living in a dreamworld." They provide network diagrams with "convenient omissions" of links between office and SCADA/production segments. They are replacing legacy, dumb, RS-232 or 900 MHz-based devices with WinCE or Linux, Ethernet or 802.11-based devices. Attackers can jump from the Internet, to the DMZ, to the office network, to the SCADA network. They do not have to be "geniuses" to figure out how to operate SCADA equipment. The manuals they need are either online or on the company's own open file servers. SCADA protocols tend to be firewall-unfriendly, as they often need random ports to communicate. SCADA protocols like the Inter Control Center Protocol (ICCP) or OLE Process Control (OPC, a DCOM-based Microsoft protocol) are brittle. OPC, for example, relies on ASN.1.

At the end of the talk Robert said there was no need to panic. I asked "Why?" and so did Brian Krebs. Robert noted that SCADA industry threat models point to "accidents, not determined adversaries." That is a recipe for disaster. The SCADA network reminds me of the Air Force in the early 1990s. I will probably have more to say on that in a future article.

I hope you enjoyed this conference wrap-up. I look forward to your comments.
By: Richard Bejtlich
Date: Fri, 27 Jan 2006 22:12:37 +0100
Black Hat Federal 2006 Wrap-Up, Part 4


Please see part 1 for an introduction if you are reading this article separately.

I finished Wednesday listening to Irby Thompson and Mathew Monroe discuss FragFS, a way to use the Windows Master File Table (MFT) on NTFS to store data covertly. The MFT can be read as a file if you open C:\$MFT as the administrator. That file can even be written to by administrators, hence the proof of concept tools "hammer.exe" and "looker.exe" provided by the presenters. Their research indicates the average MFT can store around 36 MB of hidden data, and that commercial tools neither review nor understand data hidden in the MFT. Beyond their userland implementation, the pair also wrote a Windows device driver that provides greater functionality. They will not release that code for fear of its misuse.
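The arithmetic behind that 36 MB figure is simple: each MFT FILE record occupies a fixed slot (typically 1024 bytes), while the header's "bytes in use" field is usually far smaller, leaving slack per record that forensic tools rarely inspect. A hedged sketch against a synthetic record (offsets follow the standard NTFS FILE record header; this is my illustration, not the presenters' code):

```python
import struct

RECORD_SIZE = 1024  # typical NTFS FILE record slot size

def mft_slack(record):
    """Return (bytes_in_use, slack) for one FILE record, or None
    if the signature doesn't match."""
    if record[0:4] != b"FILE":
        return None
    # Offset 0x18 holds the record's "real size" (bytes in use)
    # in the standard FILE record header layout.
    bytes_in_use = struct.unpack_from("<I", record, 0x18)[0]
    return bytes_in_use, RECORD_SIZE - bytes_in_use

# Synthetic record: valid signature, 416 of 1024 bytes in use.
rec = bytearray(RECORD_SIZE)
rec[0:4] = b"FILE"
struct.pack_into("<I", rec, 0x18, 416)
```

At a few hundred bytes of slack per record, a drive with on the order of 100,000 MFT entries yields tens of megabytes of hiding room, consistent with the presenters' estimate.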

Incidentally, prior to this talk I met Sam Stover, who gave me two FragFS stickers for my laptop. Thanks Sam.

On Thursday I started with Dr. Arun Lakhotia, who explained problems with analyzing adversarial code. His main point was that tools currently used to investigate malware were built for programmers solving development problems, not security people analyzing suspicious binaries. He outlined three types of analysis problems.

  1. Some problems can be solved in polynomial time, meaning finding a solution is not difficult.
  2. Some problems can only be solved in nonpolynomial time, meaning that with enough resources they can theoretically be solved.
  3. Some problems, however, are undecidable. There is no exact solution, regardless of resources. Approximation is the best answer.

Dr. Lakhotia said disassembly is an undecidable problem. He described a few examples to make his point.

He also noted that although there are huge numbers of distinct pieces of malware (12,000 in the last Symantec Internet Security Threat Report), there are really a very small number of malware families. In other words, code reuse and plagiarism are rampant in the malware world. Using this fact, Dr. Lakhotia demonstrated novel ways to decipher call obfuscation and reverse malware to a single "common form." Keep an eye on his Web site for a place where malware will be categorized according to families in a system called "VILO". I also learned of VX Heavens, a malware collection site. I guess Offensive Computing is similar.
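The "small number of families" observation is what makes similarity clustering like VILO possible. Here is a sketch of the general idea only (a bag of opcode n-grams compared by cosine similarity; the opcode streams are invented and this is not VILO's actual algorithm):

```python
from collections import Counter
from math import sqrt

def ngrams(opcodes, n=2):
    """Bag of opcode n-grams for one sample."""
    return Counter(tuple(opcodes[i:i + n])
                   for i in range(len(opcodes) - n + 1))

def cosine(a, b):
    """Cosine similarity of two n-gram bags; 1.0 means identical mix."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented opcode streams: two variants of one family, one outsider.
variant_a = "push mov call pop ret".split()
variant_b = "push mov call pop ret nop".split()
outsider = "xor xor xor jmp".split()
```

Variants that share code score near 1.0; unrelated samples score near 0, so families fall out of pairwise comparison.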

Next I heard my friend Kevin Mandia talk about recent incident response and computer forensics cases he and his team have worked. He stated that while investigating 215 suspected compromised systems in the last three years, he could only conclusively say 103 were 0wned. Of those, only 32 revealed enough evidence to demonstrate the intruder's point of entry. Why? The team seldom had the time, audit records, or network logs to figure out what had happened.

Kevin said that modern incident response in corporate America is characterized by "vague reporting channels" and "processes that are shelf-ware." Companies approach IR as a "directionless infantry march" instead of a "precision blitzkrieg." Kevin then outlined common indicators of compromise.

  1. The number one detection method is internal end users. When their systems crash, their anti-virus refuses to start, or they cannot "save as" documents, install new applications, run common applications, or start Task Manager, they are likely compromised.
  2. Surge in bandwidth usage
  3. Anti-virus hits: these do not mean the problem has been dealt with. Rather, they are the tip of the iceberg.
  4. IDS detection is rare, but can be effective during the IR.
  5. Customers are often a helpful source of detection indicators.

Kevin recommends enabling process tracking, to which I would add exporting the resulting logs via syslog to a central reporting server farm. He shared a few tips for understanding system processes, like using "tasklist /svc" on Windows XP and "psservice -a" to see what processes are started by the Windows svchost.exe, a sort of "inetd" for Windows.
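A small helper makes the "tasklist /svc" tip actionable: fold the text output into a PID-to-services map so each svchost.exe instance can be examined individually. This is my own sketch; the sample below is an abbreviated XP-style capture, and real output can wrap long service lists across lines, which this does not handle.

```python
import re

# Abbreviated "tasklist /svc" output (format varies by Windows
# version; wrapped service lists are not handled here).
SAMPLE = """\
Image Name                     PID Services
========================= ======== ============================================
svchost.exe                    944 RpcSs
svchost.exe                   1024 Dnscache, LanmanWorkstation
"""

def services_by_pid(text):
    """Map PID -> (image name, [services]) from tasklist /svc text."""
    out = {}
    for line in text.splitlines()[2:]:  # skip header and ruler rows
        m = re.match(r"(\S+)\s+(\d+)\s+(\S.*)", line)
        if m:
            name, pid, svcs = m.groups()
            out[int(pid)] = (name, [s.strip() for s in svcs.split(",")])
    return out
```

A responder can then ask why a particular svchost.exe hosts a service no other box runs, rather than dismissing them all as "Windows noise."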

Kevin's team demonstrated their new First Response client-server architecture for Windows. You will see me describe this as soon as I can try the beta. Suffice it to say that this free program will rock the IR world. It makes retrieving and analyzing live response data a snap.
By: Richard Bejtlich
Date: Fri, 27 Jan 2006 21:43:38 +0100
Black Hat Federal 2006 Wrap-Up, Part 3


Please see part 1 for an introduction if you are reading this article separately.

Staying on the rootkit theme, I next heard Joanna Rutkowska discuss "Rootkit Hunting vs. Compromise Detection." She has done some impressive work on network-based covert channels, but she is also a rootkit guru. Joanna talked about "Explicit Compromise Detection," and the need to scan kernel memory for integrity checking. She challenged many of the ideas of traditional rootkits, such as the need to survive a reboot, the desire to hide processes, open sockets, and so on. It seems like her new DeepDoor rootkit is an all-in-one package that hooks the Windows Network Driver Interface Specification (NDIS) code by modifying four words in the NDIS data section of memory.

She demonstrated her ddcli client talking to a DeepDoor'd victim. The client communicated with the server over port 445 TCP. Fair enough, but port 445 TCP was also able to handle normal SMB traffic, even with the rootkit active! That is insane. She showed how her rootkit could still function even with Zone Alarm denying access.

Joanna emphasized that there is no safe way to read kernel memory on Windows. She said that even reading physical memory can be tough. She requested that Microsoft implement a means to let third party vendors reliably read kernel memory. She said that such a new feature would not aid attackers, since they do not care if their unreliable methods end up crashing a target. A security vendor, however, must take extra care. Joanna noted that next-generation operating systems should ship with more than two CPU privilege modes, and that Trusted Computing will not prevent the attacks she described. She mentioned the introduction of a hypervisor that runs at ring -1 (today's systems descend to ring 0). Joanna also postulated that there may be a finite number of places for malware to hook an OS, so perhaps it would be helpful to enumerate them in a public place. A related project is her Open Methodology for Compromise Detection.

Joanna was not able to release her DeepDoor rootkit for reasons of "NDAs." She was also not able to discuss ongoing work on network covert channels for the same reason. On a personal note, I spoke with Dave Aitel (note he has cut his hair WAY back from what's shown in the photo!) who had a tough time pronouncing my name. I guessed that as a fellow Eastern European (I'm American but my ancestors are from that area), Joanna (who is Polish) would be able to pronounce "bate-lik." Joanna was sitting nearby, and sure enough, she could!

After hearing about rootkits for three straight talks, I took a break by hearing Simson Garfinkel discuss new directions for disk forensics. (He reminded the audience of his company Sandstorm Enterprises, and I learned by speaking with him that he sells a laptop version of NetIntercept for consultants like me.)

Simson spoke for a long time discussing his ongoing used hard drive analysis project. He introduced his cross-drive forensic analysis methodology, which involves finding interesting data on groups of hard drives. One of the most powerful techniques was building histograms of email addresses. On a single hard drive, the most frequently seen email address is usually the address of the hard drive owner. He also searched hard drives for patterns associated with credit cards. The interesting aspect of this sort of analysis is that he is reviewing raw data in all cases, such that he can even review something like an Oracle data drive that has no conventional partitions.
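The histogram technique needs nothing more than a regex pass over raw bytes, which is why it works even on drives with no recognizable filesystem. A minimal sketch (my reconstruction of the idea, with synthetic data):

```python
import re
from collections import Counter

# Loose email pattern, run over raw bytes rather than parsed files.
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def email_histogram(raw):
    """Count email addresses found anywhere in a raw byte stream."""
    return Counter(m.group().lower() for m in EMAIL.finditer(raw))

# Synthetic "raw drive" bytes: no filesystem, just data in which the
# hypothetical owner's address appears most often.
raw = (b"\x00junk alice@example.com mail to bob@example.org \xff"
       b"alice@example.com logs alice@example.com")
```

On a real drive image the same loop runs over the device file, and the top-ranked address usually identifies the owner, exactly as Simson described.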

I was most excited to hear about Simson's Advanced Forensic Format project. He noted that images produced by dd are big and contain no metadata. Proprietary formats like the Encase E01 are "bad and undocumented." Simson promotes AFF as an open standard that will be integrated into a future release of Brian Carrier's Sleuth Kit. AFF contains tools that do more than efficiently image and describe drives. The acquisition tools can even help bring old drives to life by pulsing and otherwise manipulating them.

The most thought-provoking aspect of Simson's presentation was his discussion of the market for used hard drives on eBay. He says people pay unreasonable amounts for small old hard drives, and definitely odd amounts for hard drives reported as broken. The implication is that those hard drives might be bought by criminals hunting for sensitive information. (Simson gave examples of such data during his presentation.) He is working to educate people that "format" does not mean "erase," and he hopes Microsoft will replace the current format command with a tool that truly zeroes out a drive. Simson also said he is unaware of any technique to retrieve data from a zeroed-out hard drive, saying that Peter Gutmann's 1996 techniques would no longer work on drives built since then due to the density of modern drives.
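The "format does not mean erase" point is worth a sketch: a true wipe overwrites every block with zeros and then verifies. Pointed at a raw device this is destructive; the demo below (my own illustration) uses an ordinary file, and per Simson's comment on Gutmann's techniques, a single zero pass is all a modern drive needs.

```python
import os

def zero_fill(path, block=1 << 16):
    """Overwrite every byte of the target with zeros, in place."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(block, size - written)
            f.write(b"\x00" * n)
            written += n
        f.flush()
        os.fsync(f.fileno())  # push the zeros past OS caches

def is_zeroed(path, block=1 << 16):
    """Verify that no nonzero byte remains."""
    with open(path, "rb") as f:
        while chunk := f.read(block):
            if chunk.strip(b"\x00"):
                return False
    return True
```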
By: Richard Bejtlich
Date: Fri, 27 Jan 2006 21:11:22 +0100
Black Hat Federal 2006 Wrap-Up, Part 2


Please see part 1 for an introduction if you are reading this article separately.

The first technical talk I attended was presented by Mariusz Burdach, titled "Finding Digital Evidence In Physical Memory." Mariusz really needed two hours or more to do his topic justice. He started his talk by holding up DoD and DoJ manuals which recommend pulling the plug as an incident response step (argh), and he said commercial tools all focus on inspecting hard drives. Unfortunately, modern rootkits may stay in non-swappable memory pages, and will not touch the hard drive. Therefore, traditional victim hard drive forensic practices may be useless against modern techniques.

Mariusz named three anti-forensic methods.

  1. Data contraception: do not create data on the hard drive; keep everything in memory
  2. Data hiding: keep processes from appearing in task or process lists
  3. Data destruction: remove suspicious information from the file system

He mentioned a few cool examples.

  • The Core Security syscall proxy as a means to not write any files to disk when loading malicious programs into memory on a target system.
  • The Metasploit SAM Juicer dumps Windows password hashes from a Meterpreter shell without writing any files to disk.
  • Hacker Defender has commercial antidetection modules.
  • Jamie Butler's FU and Shadow Walker (collaboration with Sherri Sparks) are impressive.

Mariusz briefly discussed software- and hardware-based means to acquire victim memory. On the hardware side he noted Tribble, a PCI card that can read system memory. On a related note, I ate lunch on Wednesday with Jamie Butler. His new company Komoku is working on the Copilot host monitor, another PCI card mentioned by Mariusz. If you don't have a PCI card already in the victim system, you might be able to acquire or change memory via Firewire. I missed this when it was originally announced, but now I realize it's a huge issue.

The most relevant aspect of Mariusz's talk was his announcement of two tools for reviewing physical memory dumps. The first is the Windows Memory Forensic Toolkit (WMFT) and the second is Idetect, for Linux. These look very interesting, and I believe Mariusz will release new versions once he returns to Poland. Mariusz' talk and several that followed emphasized that memory absolutely must be analyzed when performing incident response.



Next I saw John Heasman from NGS Software present "Implementing and Detecting an ACPI BIOS Rootkit." John was the best speaker at BH Federal, in terms of content and delivery style. His presentation (.pdf) has already seen some coverage. The problem centers on the fact that Advanced Configuration and Power Interface can be used to read and write sensitive areas of targets, like system memory. For example, ACPI could be used to disable all access control on a Windows system by extracting ACPI Machine Language (AML) from a target BIOS, finding initialization control methods, appending ACPI Source Language (ASL) to implement the SeAccessCheck exploit, recompiling into machine language, flashing the BIOS, and rebooting the system. Linux has a similar problem where the sys_ni_syscall exception handler could be patched.

John brought up very interesting points about rootkits. He asked whether they always need to be active, or if they could simply activate at random times to frustrate detection. He said that bootable CDs that use ACPI would be as vulnerable as the OS installed on a hard drive, making life tougher for incident responders. Sure, ACPI can be disabled, but that may disable some device drivers too. John said that ACPI debugging and Windows event logs may yield clues to ACPI exploitation, so stay alert. He also mentioned that ACPI could be modified such that fans never activate. Combine that with a process that starts the CPUs spinning and you have a software-based way to destroy a machine!

Keep in mind that a BIOS rootkit would not be a traditional rootkit. It would be used to infect a target, and then code on the target would open back doors and so on. The BIOS only offers "tens of KB" of space, according to John.

This reinforces my point that rootkits make NSM more relevant than ever. Now all we need is a Cisco router or switch rootkit.
By: Richard Bejtlich
Date: Fri, 27 Jan 2006 20:37:50 +0100
Black Hat Federal 2006 Wrap-Up, Part 1


I attended two days of Black Hat Federal Briefings 2006. I paid my own way, and I must say the conference was worth every penny. If you didn't attend, I highly recommend registering for next year's conference. I spoke briefly with Jeff Moss, who said Black Hat will return to DC in February 2007 for another Federal conference. This is welcome news. I taught Foundstone's Ultimate Hacking: Expert class at Black Hat Federal 2003, which was the last Black Hat conference in DC.

My summaries cannot do most of the speakers justice. I will attempt to offer highlights for most talks, along with links to relevant techniques or tools.

Jeff Moss began the conference by noting its main theme: paranoia. After attending many of the sessions, I understand why. Jeff didn't want Federal to be "Las Vegas-lite," and I think he succeeded in assembling a conference that truly delivered.

Dr. Linton Wells II from DoD offered the keynote. He briefly discussed the Quadrennial Defense Review, which will be delivered to Congress on 6 Feb. He lamented the fact that DoD budgets in six-year increments, beyond which the department has to look ten more years ahead. He asked the audience to consider what the world was like in 1990 compared to today. How could planners in a pre-Gulf War, Soviet-facing, Internet-minimal world anticipate the current landscape? He mentioned that DoD Directive 3000.05, "Military Support for Stability, Security, Transition, and Reconstruction (SSTR) Operations," dated November 28, 2005, emphasizes that traditional non-combat activities like network defense are on par with combat operations.

With regard to threats facing DoD, Dr. Wells said the threat is the "patient, skilled, well-resourced adversary with intent to do harm." (Dr. Wells did not say a hole in OpenSSH is a threat!) He noted that US Strategic Command has command over DoD networks now, and that DISA is trying to "minimize the number of connections from the Internet to the NIPRNet." DoD has recognized and is beginning to treat NIPRNet as the "command network" that it is, especially for logistics and health care users. Dr. Wells said classic security labels (unclassified, secret, etc.) "just don't work anymore," and current 30, 45, or 60 day patch cycles "have to change." DoD has even spoken with Google about how that company decides how to internally select and fund projects on 3-6 development cycles.

I asked Dr. Wells about the security stand-down that happened in November. (More details here.) He said "we have a problem, and people need to pay attention to it." He said the stand-down included a password change and patching of applications, and that DoD has about 100,000 people with sys admin duties. I followed up with a question to two of Dr. Wells' team about DoD usage of Snort, given that Sourcefire was purchased by Check Point -- an Israeli company. They said there was "concern at high levels," and that a deputy secretary of defense had just been briefed on the issue on Tuesday. They emphasized that, in the future, DoD might require vendors to provide source code of their products to "assure the pedigree of their software." DoD is worried about foreign elements introducing back doors into code. Finally, of the $450 billion spent by DoD each year, $29-30 billion is IT-related. Of that amount, about $2 billion is IA-related.