Sunday, June 24, 2012

Bug Bounty Programs - big cash for bugs!

Alex Miller, a 12-year-old tech whiz from San Jose, received a check in the post for $3,000 from Mozilla. It was a reward for tracking down a critical security flaw in the Firefox web browser. 25-year-old Aaron Portnoy has been tracking down bugs since he was barely into his teens. He first realised it could be a potentially lucrative career when tax collectors from the American Internal Revenue Service came calling, wondering how he had made $60,000. Aaron was not even 20 at the time. Sergey Glazunov was paid $60,000 for finding a security flaw in the Chrome browser on a fully patched Windows 7 system.
These are just some high-profile examples of individuals who caught the attention of the press. But I think you get the point: finding security holes in widely used software is financially rewarding these days. If you want to participate in some of these programs, here are the details.

Google
Rewards for qualifying bugs range from $100 to $20,000. The usual rewards for the anticipated classes of bugs, by target (accounts.google.com / other highly sensitive services [1] / normal Google applications / non-integrated acquisitions and other lower-priority sites [2]), are:

  • Remote code execution: $20,000 / $20,000 / $20,000 / $5,000
  • SQL injection or equivalent: $10,000 / $10,000 / $10,000 / $5,000
  • Significant authentication bypass or information leak: $10,000 / $5,000 / $1,337 / $500
  • Typical XSS: $3,133.7 / $1,337 / $500 / $100
  • XSRF, XSSI, and other common web flaws: $500 - $3,133.7 / $500 - $1,337 / $500 / $100 (the first two depending on impact)
[1] This category includes products such as Google Search (https://www.google.com), Google Wallet (https://wallet.google.com), Google Mail (https://mail.google.com), Google Code Hosting (code.google.com), and Google Play (https://play.google.com).
[2] Note that acquisitions qualify for a reward only after the initial six-month blackout period has elapsed.
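As a rough illustration, the tiered payouts above can be modeled as a simple lookup table. This is my own sketch of the published 2012 figures; the tier labels and function name are informal, not Google's terms:

```python
# Sketch of Google's 2012 reward matrix as a lookup table.
# Tier labels ("accounts", "sensitive", "normal", "acquisition") are
# informal shorthand for the four columns, not official terms.
REWARDS = {
    "remote code execution": {"accounts": 20000, "sensitive": 20000,
                              "normal": 20000, "acquisition": 5000},
    "sql injection": {"accounts": 10000, "sensitive": 10000,
                      "normal": 10000, "acquisition": 5000},
    "auth bypass or info leak": {"accounts": 10000, "sensitive": 5000,
                                 "normal": 1337, "acquisition": 500},
    "typical xss": {"accounts": 3133.7, "sensitive": 1337,
                    "normal": 500, "acquisition": 100},
}

def reward(bug_class: str, tier: str) -> float:
    """Return the usual payout for a bug class on a given service tier."""
    return REWARDS[bug_class.lower()][tier]

print(reward("Typical XSS", "accounts"))  # 3133.7
```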

Mozilla
The bounty for valid critical client security bugs is a $3,000 (US) cash reward and a Mozilla T-shirt. The bounty will be awarded for critical and high severity security bugs that meet the following criteria:
The security bug is present in the most recent supported, beta, or release candidate version of Firefox, Thunderbird, or Firefox Mobile, or in Mozilla services which could compromise users of those products, as released by Mozilla Corporation or Mozilla Messaging.

PayPal
PayPal is a new entrant to this field. They have not disclosed any details about the size of their rewards, but the obvious mode of payment is through PayPal.


Facebook
Facebook has fixed the typical bounty at $500, which may be increased for specific bugs. Qualifying issues include any exploit that could compromise the integrity of Facebook user data or circumvent the privacy protections of Facebook user data.

ZDI
This is the Zero Day Initiative from TippingPoint, which was acquired by Hewlett-Packard. It is unique in that bugs in third-party software are accepted. As a member of the ZDI program, you earn points each time a vulnerability submission is purchased. Points are treated in a manner similar to airline frequent-flyer miles: points accrue each year on a dollar-for-dollar basis, based on the total amount paid for vulnerability submissions by the researcher during that calendar year. For instance, if the Zero Day Initiative buys your vulnerability for $5,000, then you receive 5,000 points for that submission. If you accrued 37,000 points over calendar year 2008, then for calendar year 2009 you would be considered to have ZDI Gold status. The following are the various levels of ZDI Rewards membership:
[Image: ZDI Rewards membership levels]
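The dollar-for-dollar accrual can be sketched in a few lines. Note that the tier thresholds below are hypothetical placeholders of my own, chosen only so that the 37,000-point example from the text lands in Gold; ZDI's actual cut-offs may differ:

```python
# Sketch of ZDI-style point accrual: one point per dollar paid per year.
# The tier thresholds here are HYPOTHETICAL, picked only so that the
# 37,000-point example in the text maps to "Gold".
TIERS = [(50000, "Platinum"), (25000, "Gold"), (10000, "Silver"), (0, "Bronze")]

def yearly_points(payouts_usd):
    """Accrue points dollar-for-dollar across a calendar year's payouts."""
    return sum(payouts_usd)

def tier_for(points):
    """Map a year's point total to next year's membership level."""
    for threshold, name in TIERS:
        if points >= threshold:
            return name

pts = yearly_points([5000, 12000, 20000])  # three purchased submissions
print(pts, tier_for(pts))  # 37000 Gold
```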


Some lesser known ones:

Barracuda
The bounty starts at $500 for qualifying bugs in the following security products by Barracuda Networks:
  • Barracuda Spam & Virus Firewall
  • Barracuda Web Filter
  • Barracuda Web Application Firewall
  • Barracuda NG Firewall
The panel may reward up to $3,133.7 for particularly severe bugs. You may opt to donate your bounty to a charity, and Barracuda will credit your work as a bug/vulnerability reporter if you desire. Only the first report of a bug qualifies. (Why $3,133.7? The number pays homage to “eleet”, used by some in the security community as slang for “elite” and sometimes written as 31337.)

Piwik
The bounty for valid critical security bugs is a $500 (US) cash reward. The bounty for non-critical bugs is $200 (US), paid via PayPal. The bounty will be awarded for security bugs that meet the following criteria:

  • Security bug must be original and previously unreported
  • Security bug is present in the most recent supported or release candidate version of Piwik
  • If two or more people report the bug together the reward will be divided among them


Ghostscript
Artifex Software rewards folks who find bugs in their proprietary interpreters for the PostScript language and for PDF. Accepted fixes for bugs at priority P1 and P2 pay a bounty of US$1,000 each; bugs at lower priorities pay US$500 per bug.

Hex-Rays
Hex-Rays will pay a US$3,000 bounty for certain security bugs in their proprietary IDA or Decompiler applications. The following security bugs will be considered:
  • Security bugs must be original, previously unreported, and not yet fixed.
  • Security bugs with high or critical impact are eligible (remote code execution, privilege escalation, etc).
  • Security bugs must be in the Hex-Rays code (not in third party/contributed code). In some cases we may take responsibility for third-party code as well.
  • Security bugs must be present in the latest public release of IDA/Decompiler.

Wednesday, June 20, 2012

Bromium Microvisor - is it the future of security?

Simon Crosby, the founder of the Xen hypervisor, today announced his latest venture, Bromium, at a conference. Is it a coincidence that the name rhymes with the Chromium project from Google? Well, the commonalities are sandboxing and security. While the Chromium project has tried to implement a browser and an OS based on a browser interface, Bromium will try to implement a layer under the OS which can run a browser or, for that matter, any other application.
Virtualization has evolved from a buzzword into a widely accepted and implemented technology in recent times. VMware was one of the pioneers of software virtualization. Then Intel came out with its hardware virtualization concept, which resulted in hardware-based hypervisors. The purpose of a hypervisor is to create multiple virtual machines running independent OSes on the same physical machine. A microvisor differs from this in that it virtualizes applications. Since sandboxing is at the heart of this implementation, each of the virtualized applications would be mutually isolated. Concept-wise, this is very futuristic. If it is implemented and accepted by the industry, we might be at the beginning of a new era in application and software security. But it remains to be seen whether the microvisor lives up to its hype.

Bromium's products are built on the Bromium Microvisor, a tool that can create lightweight virtual machines called micro-VMs. As many as 150 micro-VMs can be created on a typical desktop system with 4 GB of memory, it is claimed. The entire tool is about 10,000 lines of code.

To understand the implementation of a micro-vm better, here's a snippet from the whitepaper.


When a micro-VM is created, its access to these system resources is restricted according to a set of simple, task- and trust-level-related resource policies. Whenever the protected task attempts to access any restricted system resource, the virtualization hardware forces a CPU VM_EXIT, suspending execution of the task and giving control of the CPU to the Microvisor to arbitrate access using the resource policies for the task.

When a micro-VM executes, its (VT-managed) memory map contains a representation of Windows and the necessary DLLs required for execution, plus the task state. Access to OS services is restricted by the resource policies, and any changes that the task makes to OS memory or the golden file system are “Copy-on-Write” (CoW). Thus, if the task is compromised by malware that modifies the Windows kernel or white-listed DLLs, it will only succeed in modifying a local copy, and not the IT-provisioned, golden Windows.

Each micro-VM is presented with a narrowed view of the file system that contains only the files it needs – an implementation of the principle of “least privilege” – with CoW semantics. If malware modifies a file, the Microvisor will ensure that it only modifies a copy of the file. Any files modified or saved by a micro-VM are stored efficiently as block-deltas against the original file, which remains unchanged until the micro-VM exits (the user closes a window, or the task terminates). At this point the Microvisor discards the task’s memory image and uses a persistence policy for the task to save relevant task files (if any), and to decide whether to persist any new files. Any persisted files are securely tagged with the trust level of the micro-VM, and all access to untrusted files must be made from within another micro-VM.
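The copy-on-write and least-privilege behaviour described above can be sketched as a toy model. This is my own simplified illustration, not Bromium code; the class name, policy shape, and paths are all invented:

```python
# Toy model of micro-VM copy-on-write file access (illustrative only,
# not Bromium's implementation). A "golden" file system is shared
# read-only; each micro-VM keeps private deltas, so malware can only
# ever modify its own local copy.
class MicroVM:
    def __init__(self, golden_fs, allowed_files):
        self.golden = golden_fs            # shared, immutable base image
        self.allowed = set(allowed_files)  # least-privilege narrowed view
        self.deltas = {}                   # per-VM copy-on-write layer

    def read(self, path):
        if path not in self.allowed:
            raise PermissionError(f"{path} not visible to this micro-VM")
        return self.deltas.get(path, self.golden.get(path))

    def write(self, path, data):
        if path not in self.allowed:
            raise PermissionError(f"{path} not visible to this micro-VM")
        self.deltas[path] = data           # golden copy is never touched

golden = {"C:/Windows/kernel": "clean", "C:/doc.txt": "v1"}
vm = MicroVM(golden, ["C:/doc.txt"])
vm.write("C:/doc.txt", "v2")
print(vm.read("C:/doc.txt"))   # v2 (the VM's local delta)
print(golden["C:/doc.txt"])    # v1 (golden image unchanged)
```

When the micro-VM exits, discarding `vm.deltas` models the Microvisor throwing away the task's changes unless a persistence policy saves them.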

Sunday, June 10, 2012

iOS Security - Scratching the surface!

Apple Inc. recently published details about iOS security. It's interesting to note that the first iPhone was released in July 2007; iPhone hackers claimed to have pwned iOS within a few days of the official release.

Here's a high-level architectural diagram representing the different layers of iOS, from the PDF.

Source: Apple
What caught my interest was the fact that the entire file system is encrypted. Yes, FDE (Full Disk Encryption), as it is called, is implemented in iOS. FDE can be complex, performance-hungry, and a pain to configure at times. Configuration-wise, Apple took away the pain by turning it on by default on your iPhone. Performance-wise, the encryption engine, the Crypto Engine, is implemented in hardware, reducing the operational overhead. Here's an excerpt describing the implementation: "Every iOS device has a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory, making file encryption highly efficient. Along with the AES engine, SHA-1 is implemented in hardware, further reducing cryptographic operation overhead." Security-wise, the root key for encryption is fused into the processor during manufacture. This would mean that if you were to move the memory chips from one device to another, the files would be inaccessible.
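The chip-swapping point can be illustrated with a small sketch of key derivation. This is only a conceptual model of mine: real iOS uses a hardware AES-256 engine and its own key hierarchy, whereas here a stdlib HMAC-SHA1 stands in for the derivation step, and the UID values and file IDs are made up:

```python
# Conceptual sketch (NOT Apple's scheme): per-file keys are derived from
# a key fused into the processor, so flash contents moved to another
# device derive different keys and the files stay inaccessible.
# HMAC-SHA1 stands in for the hardware crypto engine.
import hashlib
import hmac
import os

def derive_file_key(device_uid: bytes, file_id: bytes) -> bytes:
    """Derive a per-file key bound to a device's fused UID key."""
    return hmac.new(device_uid, file_id, hashlib.sha1).digest()

uid_a = os.urandom(32)   # stands in for the key fused into device A
uid_b = os.urandom(32)   # a different device's fused key

key_on_a = derive_file_key(uid_a, b"photo-0001")
key_on_b = derive_file_key(uid_b, b"photo-0001")
print(key_on_a != key_on_b)  # True: same file, wrong device, wrong key
```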

The other point of interest to me was app code signing. Here's an excerpt that aptly describes this feature: To ensure that all apps come from a known and approved source and have not been tampered with, iOS requires that all executable code be signed using an Apple-issued certificate. Apps provided with the device, like Mail and Safari, are signed by Apple. Third-party apps must also be validated and signed using an Apple-issued certificate. Mandatory code signing extends the concept of chain of trust from the OS to apps, and prevents third-party apps from loading unsigned code resources or using self-modifying code.

To date, there have been no known instances of malware on iOS. An exception to this was Ikee, an iPhone worm, a later variant of which stole financially sensitive information from infected iPhones. But it only ever spread on jailbroken devices.