Saturday, September 8, 2012

Cognitive Side Channels

The media buzz this week involves so-called "side channel" attacks: leaking information from human brain-to-computer interfaces.  Not a ubiquitous technology today, but quite possibly down the road.

Essentially, the attacks work by showing a plugged-in subject an image of, say, a bank, at which point the subject's mind races down the neural paths for things like account numbers, PINs, maybe balances or recent expenditures.  The signals picked up by the device can then capture thoughts that would otherwise stay private inside the subject's brain.

Sound scary?  It is.  The brain wasn't designed to keep information from itself.  Count us out of the "early adopter program".

Reminds us of the time the Ghostbusters were told the worst thing they could think of would be their next enemy:

Wednesday, April 11, 2012

Measuring Wallets' Contents with Metal Detectors

Turns out it may be more than RFID protection that your wallet needs. New Scientist reports how some academics at the University of Washington - Seattle can exploit the metallic features of dollar bills to count the money in your wallet. An excerpt:
They found an ordinary handheld metal detector was able to pick up a dollar bill from 3 centimetres away, and placing the notes behind plastic, cardboard and cloth did little to block the signal. Adding further bills in $5 increments increased the strength of the signal, making it possible to count the number of bills, though converting this into an actual dollar value would be difficult as notes of different denominations contain the same amount of magnetic ink.
Using larger metal detectors such as those found in airports should also increase the range of sensing, though detecting banknotes in such situations would be trickier as many other sources could interfere with the signal.

Thursday, March 22, 2012

In Memory: Bill Zeller

Having just caught up on the positive achievements of Alex Halderman, we have discovered that another promising young mind has been lost forever.

Bill Zeller was also a member of Ed Felten's Freedom to Tinker crew and a PhD student at Princeton. Bill is also known for creating MyTunes, one of the first implementations to show that Digital Rights Management (DRM) is really the emperor without clothes.

You can read Bill's tragic story here, or his own words here. He will be missed, even by those who did not know him directly.

Wednesday, March 21, 2012

Alex Halderman on Internet Voting

Computer Science Professor J. Alex Halderman is an up-and-coming academic star whom we at Securology have been watching for a while now, since some of the earliest days of the great work put on at Princeton by Ed Felten and his Freedom to Tinker group (an excellent blog). Halderman, having completed his PhD (Bell Labs UNIX and C programming language creator Brian Kernighan was on his committee!), has moved on to a faculty position at the University of Michigan and is continuing his excellent work at the intersection of technology and public policy, which always puts security and privacy issues in the spotlight.
Here is an excellent interview with Halderman (presumably shot at the 2012 RSA Conference) on how he and his students (legally) hacked the mock Internet Voting trial put on by Washington, D.C., and why Internet Voting should not be employed for a very long time.

In summary, there are two main reasons why Internet Voting is a horrible idea:
  1. Getting the software perfectly correct is, for all intents and purposes, impossible.
  2. Authenticating a voter eliminates the ability to anonymize the voter's vote (major privacy flaw).

Thursday, February 23, 2012

John Nash Crypto Letters

John Nash (inspiration for A Beautiful Mind) wrote letters to the US Government in the 1950s which have recently been declassified and released to the public. Now you can read for yourself how John Nash was essentially 20 years ahead of the world around him in his plans for a cryptographic system.

The questions to ask yourself are: Did the US Government sit on this devised plan? If so, why? If not, how long were they keeping it to themselves before moving on to something better?

One thing can certainly be learned from history: powerful governments and organizations have kept tight lips about the cryptography they use, and have stayed far ahead of the power curve. Simon Singh covers the topic well in his book, The Code Book. (If this topic is even partially interesting to you, then you should at least read that book, if not buy a copy.) At every step in recorded history, there have been parties wishing to keep their communications confidential, and significant disparities in the technology to do so. Charles Babbage's scheme for breaking the Vigenère Cipher comes to mind.

Saturday, August 16, 2008

The Case of MIT Subway Hackers

By now, you may have read about a group of MIT students who were set to present details of the insecurities of the "CharlieCard" subway system used in Boston and elsewhere. [I myself had wondered, just this summer, exactly what data resides on the magnetic stripes of a similar transit card from the Washington DC subway system. It doesn't take much hard thinking to realize that these machines have to work regardless of whether there is connectivity back to the central office to validate a transit card; ergo, there must be monetary values stored on the card.] The transit authority didn't like it and got a federal judge to issue a gag order. A lot of good that did, because MIT still has the presentation (and other items) posted here. And now the Electronic Frontier Foundation is fighting the legal battle for the students. But the real question here (which stirs the disclosure debate once again) is: who is at fault?

Short version: everybody is at fault.

Long version ...

On the one hand, this group of students, led by the world-renowned Ron Rivest (that's right, the "R" in the "RSA" encryption algorithm), only informed the transit authority a week before the presentation that they were going to give the talk at DEFCON. That's right, just one week. And their advisor, Ron Rivest, was out of town for at least part of that time. The CFP (Call For Papers) closed on May 15. DEFCON's policy is to notify submitters of acceptance within two business days, so the MIT undergrads should have known no later than Monday, May 19, that they were going to be giving the talk. This wasn't impromptu. In order to get accepted, they would have had to bring the goods, so they clearly knew enough to start the communication process with the Transit Authority. Giving the MBTA less than a week to respond (this is a bureaucratic government agency we're talking about-- nothing gets done in one week) certainly put them on the defensive. That was a stupid mistake by the MIT crew.

On the other hand, fueled by a pair of stick-your-head-in-the-sand and I-hate-academic-research attitudes, the Transit Authority's gag order sets a dangerous precedent. Two Ivy League Computer Science Professors with quite the reputation when it comes to security, Matt Blaze at U Penn and Columbia University's Steve Bellovin, have spoken out against this bad precedent, arguing that it stifles needed future research. They also signed a letter from the EFF in an attempt to overturn the ruling (PDF). Here is the list of Professors and Researchers who signed the letter:
  • Prof David Farber, Carnegie Mellon
  • Prof Steven M Bellovin, Columbia University
  • Prof David Wagner, UC Berkeley
  • Prof Dan Wallach, Rice University
  • Prof Tadayoshi Kohno, U Washington
  • Prof David Touretzky, Carnegie Mellon
  • Prof Patrick McDaniel, Penn State University
  • Prof Lorrie Faith Cranor, Carnegie Mellon
  • Prof Matt Blaze, U Penn
  • Prof Stefan Savage, UC San Diego
  • Bruce Schneier
We have to acknowledge that public disclosure of paradigms of attack and defense is key to our success. Note that "paradigms" does not include or insinuate specific attack details, nor does it applaud the bug parade that some (in)security researchers use to justify their careers' existence. Whether or not the MIT kids were just another set of bug paraders, a restraining order needs to be considered far more carefully than it was here. Transparency in design is what keeps the best security systems in place.

Another fault is that both the Federal courts and the MBTA thought they could control the dissemination of the information solely with a gag order. Have they any clue what sort of people attend Blackhat and DEFCON every August in Las Vegas? The injunction was late; the CDs with presentation materials were already printed and distributed to paying attendees. Good luck getting that content back (even if MIT wasn't bold enough to keep it posted). And there's another HUGE side-effect of gagging presenters at DEFCON: it spurs the roughly six or seven thousand attendees (and everyone around the world who didn't attend for one reason or another) to take what little is publicly known (i.e. subway payment cards can be hacked to get free rides) and wage an all-out war against the people trying to withhold the information (the MBTA). Even if the talk wasn't one of the best this year, it now holds an elusive status, which is more than enough justification for a bunch of people (who are paid to understand how systems break) to spend extra time figuring out the forbidden details. That was plain stupid on the MBTA's part. Even if they thought the students were criminal or inciting criminal fraud, they should have taken another approach (e.g. finding a way to make them financially liable for lost revenue due to fraud committed by these techniques-- not that I condone that action, it just would have been more effective).

How could this have gone better?
Well, Ron Rivest, on behalf of his students, could have contacted the MBTA months in advance. They could have scheduled full briefings with the appropriate audiences without the pressure to act immediately (pressure that resulted in the MBTA calling the FBI and pursuing a fraud investigation against the students).

Saturday, May 3, 2008

Automating Exploitation Creation

Some academic security researchers at Carnegie Mellon have released a very compelling paper which introduces the idea that merely monitoring a vendor's patch releases can enable automated exploit creation. (They call it "APEG".) They claim that automated analysis of the diffs between a pre-patch program and a patched program is possible-- and that in some cases an exploit can be created in mere minutes, while some clients take hours or days to check in and install their updates! Granted, there is some well-established commentary from Halvar Flake about the use of the term "exploit", since the APEG paper really only describes "vulnerability triggers" (Halvar's term).

Our friends at Carnegie Mellon have proved that the emperor hath no clothes. Creating exploits from analyzing patches is certainly not new. What is novel, in this case, is how the exploit creation process is automated:
"In our evaluation, for the cases when a public proof- of-concept exploit is available, the exploits we generate are often different than those publicly described. We also demonstrate that we can automatically generate polymorphic exploit variants. Finally, we are able to automatically generate exploits for vulnerabilities which, to the best of our knowledge, have no previously published exploit."

Friday, March 21, 2008

More Broken DRM

From Slashdot:

"In July 2007, Richard Doherty of the Envisioneering Group (BD+ Standards Board) declared: 'BD+, unlike AACS which suffered a partial hack last year, won't likely be breached for 10 years.' Only eight months have passed since that bold statement, and Slysoft has done it again. According to the press release, the latest version of their flagship product AnyDVD HD can automatically remove BD+ protection and allows you to back-up any Blu-ray title on the market."
How many more times must we endure the faulty logic of DRM (Digital Rights Management)? It's simple, if you understand key management: you cannot have a ciphertext (the Blu-ray movie) that you allow an end-user to convert to a plaintext (i.e. when it's playing in a hardware or software player) without also allowing access to the plaintext key that unlocks the ciphertext (which all players must have, otherwise the video is just encrypted data-- not playable).
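To make the point concrete, here is a toy "player" (a sketch of the key-management dilemma, not any real Blu-ray or BD+ scheme; it assumes the third-party cryptography package): the content only becomes watchable because the key is sitting right there in the player's code and memory.

```python
# Toy illustration of the DRM dilemma (not any real Blu-ray/BD+ scheme), using
# the third-party "cryptography" package: for the player to render the movie,
# the decryption key must be present, in usable form, inside the player itself.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)      # in a real player this is hidden in the binary/hardware
nonce = os.urandom(16)

def encrypt(plaintext: bytes) -> bytes:           # the "studio" side
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def play(ciphertext: bytes) -> bytes:             # the "player" side
    # To produce watchable output, the player MUST apply the key here.
    # Whatever obfuscation surrounds it, the key ends up in the player's
    # memory, which is exactly where a determined user can recover it.
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

disc = encrypt(b"feature presentation")
print(play(disc))   # b'feature presentation' -- and the key was right there
```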

DRM defies the laws of nature. It's just like the recent cold-boot attacks on disk encryption: the decryption keys are there. They're in the software. If you can manipulate the hardware, you can get them. And sometimes (as is the case with the BD+ hack) you don't even have to manipulate the hardware. The keys have to be stored somewhere-- usually in memory, just like the whole disk encryption products. In fact, a possible application of the Princeton group's research would be to cold boot computers that are playing BD+ protected Blu-ray discs, since they came up with new methods of finding (identifying) encryption keys stored in decaying DRAM and correcting the bit-flip decay.

Even if the Blu-ray people mandated that only hardware Blu-ray devices could be created and sold (since software players have been the primary target for DRM destruction), the keys would have to exist in every one of their customers' homes-- right there in the players! They might be a little more difficult to reverse engineer and discover, since hardware tends not to be as flexible as software, but the keys would have to be there, stored in CMOS perhaps, or possibly hard-coded into the decryption-playback circuits. And we have seen, time and time again, that the effort of even a single person to reverse engineer the decryption key can be devastating to DRM schemes. All it takes is one person to discover it and a company like Slysoft to find a way to legally market it.


...
In summary: DRM is not possible. If you present data to a person outside of your physical reach, then you cannot control how they use that data. Anyone who claims otherwise is peddling the information security equivalent of perpetual motion. Don't buy it.

Saturday, March 8, 2008

Anderson Proves PIN Entry Devices are Insecure

If there is a theme in good security research right now, it's that we cannot trust hardware.

Ross Anderson and company at the Computer Laboratory at Cambridge University have performed some interesting research demonstrating how a paperclip can be used to steal cardholder data from a bank card PIN Entry Device (PED). Machines believed to be secure because they were assessed at the weakest level of the esteemed Common Criteria are apparently rife with flaws. The Cambridge group believes that fraudsters have been using these techniques for some time.

Thursday, February 21, 2008

Felten Destroys Whole Disk Encryption

Ed Felten and company publicized some research findings today on a form of side-channel attack against whole disk encryption keys stored in DRAM.

We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux....

Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system....
Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.
Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
This is a good example of academic security research. It shows that the trust whole disk encryption software places in the hardware is misplaced.
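Their key-finding trick is worth sketching, because it explains how keys can be recovered even from partially decayed memory: an AES-128 key is almost never in RAM by itself; the 176-byte key schedule expanded from it sits alongside, and that redundancy makes the key stand out. Below is our own minimal sketch of the idea (illustrative error threshold and a slow brute-force scan; nothing like the speed or error-correction of the real tool):

```python
# A from-scratch sketch of the key-schedule search idea (ours, not the
# Princeton tool): slide a 16-byte window across a memory image, expand it as
# if it were an AES-128 key, and check whether the following 160 bytes look
# like its key schedule, tolerating a few decayed (flipped) bits.
import os

def _xtime(a):                      # multiply by 2 in GF(2^8)
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def _gf_mul(a, b):                  # general GF(2^8) multiply (for S-box build)
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        a, b = _xtime(a), b >> 1
    return p

def _build_sbox():
    sbox = [0] * 256
    for x in range(256):
        inv = next((y for y in range(1, 256) if _gf_mul(x, y) == 1), 0)
        s = 0
        for i in range(8):          # affine transform of the inverse
            bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
                   ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
            s |= bit << i
        sbox[x] = s
    return sbox

SBOX = _build_sbox()

def expand_key(key):                # AES-128 key schedule: 16 -> 176 bytes
    w = [list(key[i:i + 4]) for i in range(0, 16, 4)]
    rcon = 1
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord + SubWord
            t[0] ^= rcon
            rcon = _xtime(rcon)
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes_keys(dump, max_bit_errors=48):
    for off in range(len(dump) - 176):
        sched = expand_key(dump[off:off + 16])
        errs = sum(bin(a ^ b).count("1")
                   for a, b in zip(sched[16:], dump[off + 16:off + 176]))
        if errs <= max_bit_errors:
            yield off, dump[off:off + 16], errs

if __name__ == "__main__":
    key = os.urandom(16)
    dump = os.urandom(2048) + expand_key(key) + os.urandom(2048)  # fake "memory image"
    for off, k, errs in find_aes_keys(dump):
        print(f"AES-128 key candidate at offset {off}: {k.hex()} ({errs} flipped bits)")
```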

There's even a video:

Wednesday, October 24, 2007

DNS Re-Binding

One of the biggest problems with security threats in the Web 2.0 world is the erosion of trust boundaries. Dynamic web applications pull data together in "mashups" from multiple sources. Modern online business seems to require rich, dynamic code and data, yet as fast as we can field it, there are problems keeping boundaries cleanly separate.

Some researchers at Stanford have spent some time identifying some very critical problems involving the use of DNS in the Web 2.0 world. It's not insecurity research, though, because they provide some options for solutions, addressing the problem conceptually.

[Image taken from PDF document]

Basically, the attack--which can be mounted for less than $100 in capital expenses-- consists of changing the IP address to which a DNS record points after a "victim" connects to the malicious web server. This allows firewall policies to be circumvented, since the connections are initiated by victim clients. The paper also discusses how this method could be used to obfuscate or distribute the origins of spam or click fraud.

The solution, though, is simple. DNS records can be "pinned" to an IP address instead of being allowed to live only transiently in memory for a few seconds. And DNS records pointing to internal or private (non-routable) IP address ranges should be handled with caution.
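Here is a minimal sketch of those two defenses (our illustration, not the Stanford authors' code): pin the first answer for the life of the session, and refuse answers that point into private or loopback address space.

```python
# Minimal sketch of DNS pinning plus private-range filtering (illustrative,
# not the Stanford paper's implementation): a later re-resolution cannot
# silently re-bind a hostname to an internal host behind the firewall.
import ipaddress
import socket

_pinned = {}  # hostname -> IP pinned at first use

def resolve_pinned(hostname: str) -> str:
    if hostname in _pinned:
        return _pinned[hostname]          # ignore any later (re-bound) answers
    ip = socket.gethostbyname(hostname)
    addr = ipaddress.ip_address(ip)
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError(f"{hostname} resolves to non-routable {ip}; refusing")
    _pinned[hostname] = ip
    return ip

# Usage: every connection for this hostname reuses the pinned address,
# even if the attacker's DNS server later answers with 192.168.x.x.
print(resolve_pinned("example.com"))
```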

It's also interesting to note that DNSSEC does nothing to prevent this problem, since this is not a question of the integrity (or authenticity) of the DNS records from the DNS server; the DNS server is providing the malicious records. It's not a man-in-the-middle attack (although that could be a way to implement this, potentially).


And on a side note ... This is yet another example of why firewalls are pretty much useless in a modern information security arsenal. We cannot hide behind OSI Layers 3-4 anymore when all the action is happening in layers 5-7 (or layer 8).

Monday, October 22, 2007

Open Source Trustworthy Computing

There is a pretty good article over at LWN.net about the state of Trustworthy Computing in Linux, detailing the current and planned support for TPMs in Linux. Prof Sean Smith at Dartmouth created similar applications for Linux back in 2003, when the TPM spec was not yet finalized.

Of the TPM capabilities discussed, "remote attestation" was highlighted most prominently in the article. But keep in mind that Linux, being a monolithic kernel, has far more components needing integrity checking than there are available PCRs in a TPM. The suggested applications (e.g. a remote email server checking the integrity of an "email garden" terminal) stretch the technology's capabilities.
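The mechanics help explain why: each measured component is "extended" into a PCR as a hash chain, so the final value is only meaningful if a remote verifier knows the expected measurement of every single component that went in -- a tall order for a monolithic kernel. A rough sketch of the extend operation (illustrative, not a TPM driver; TPM 1.2-style SHA-1 extend assumed):

```python
# Sketch of TPM-style measurement (illustrative, not a TPM driver): components
# are folded into a PCR as a hash chain, so the final value only means
# something if the verifier knows the expected hash of every component.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = SHA-1(PCR_old || SHA-1(component)), per the TPM 1.2 style
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20  # PCRs start zeroed at boot
for component in (b"bootloader", b"kernel image", b"initrd",
                  b"module: ext4", b"module: e1000"):
    pcr = extend(pcr, component)

print(pcr.hex())  # a remote verifier must reproduce this exact chain to trust it
```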

Also keep in mind that unless coupled with trust at the foundational levels of memory architecture, integrity-sensitive objects could be replaced in memory by any device that has access via DMA. IOMMUs or similar will be required to deliver the solution fully.

And on that note, Cornell's academic OS, Nexus, has a much better chance of success because of the limited number of components that live in kernel space. The fewer the items that need "remote attestation", the more likely the attestation will be meaningful at all. At this point, modern operating systems need to simplify more than they need to accessorize, at least if security is important.

Thursday, October 18, 2007

Cornell's Nexus Operating System

I hold out hope for this project: Cornell University's Nexus Operating System. There are only a few publications thus far, but the ideas are very intriguing: microkernel, trustworthy computing via TPMs (they're cheap and pervasive), "active attestation" and labeling, etc.


And it's funded by the National Science Foundation. I hope to see more from Dr. Fred Schneider and company.

Zealots and Good Samaritans in the Case of Wikipedia

Dartmouth College researchers Denise Anthony and Sean W. Smith recently released a technical report of some very interesting research around trustworthy collaboration in an open (and even anonymous) environment: Wikipedia.

What is interesting in this research is the interdisciplinary approach between Sociology and Computer Science, analyzing the social aspects of human behavior alongside the technical security controls within a system. Far too often, those overlapping fields are ignored.

Also of interest, Wikipedia's primary security control is the ability to detect and correct security failures, not just prevent them (as so many security controls attempt to do). In an open, collaborative environment, correction is the best option. And since Wikipedia, in true wiki form, keeps every edit submitted by every user (anonymous or not), there is a wealth of information to mine regarding the patterns of (un)trustworthy input and human-scale validation systems.
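To get a sense of just how accessible that raw material is, here is a small sketch (ours, not the Dartmouth study's methodology; the article name, revision limit, and anonymous-editor tally are all illustrative) that pulls recent revision history for one article from the public MediaWiki API and counts anonymous versus registered contributions:

```python
# Illustrative only (not the Dartmouth researchers' code): fetch the recent
# revision history of a single Wikipedia article via the public MediaWiki API
# and tally edits flagged as coming from anonymous (IP) contributors.
import json
import urllib.request

API = ("https://en.wikipedia.org/w/api.php?action=query&prop=revisions"
       "&titles=Cryptography&rvprop=user%7Ctimestamp&rvlimit=500"
       "&format=json&formatversion=2")

req = urllib.request.Request(API, headers={"User-Agent": "research-sketch/0.1"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

revs = data["query"]["pages"][0]["revisions"]
anon = sum(1 for r in revs if r.get("anon"))   # API marks IP editors as anonymous
print(f"{len(revs)} recent revisions; {anon} from anonymous (IP) editors")
```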