"In 5 years, nobody will deploy new servers for applications. Every application will run on a virtual machine. Inevitably. And as we approach this end-state, more and more of our applications are virtualized, and performance becomes a bigger concern....So, now you know why you need to read Nate’s posts, right?
[There] is a hole in the X86 VMM security strategy. Today, if you want your guest VMs not to have to trust each other, and one of them needs direct access to NIC, you have to trust that the NIC can’t be coerced into copying network packets with executable code directly into the kernel memory of the other machines....
You only need to read Nate’s posts if you (a) believe virtualization will be ubiquitous within the next 5 years, (b) work in security, and (c) believe you need to sound like you know what you’re talking about. Otherwise, Nate’s posts are entirely optional."
(noun) securology.
Latin: se cura logia
Literally translated: the study of being without care or worry
Friday, September 28, 2007
Thomas Ptacek on DMA, Virtualization, and Nate Lawson
Thomas Ptacek makes an important comparison between network security and memory/hardware resource allocation, especially in terms of virtualization. This is an excellent follow-up to Nate Lawson's posts that I mentioned early today. From Thomas' post:
Code Review vs. Application Layer Firewall
In a recent sales attempt, I received a whitepaper (warning: PDF) from Web Application Layer Firewall vendor Imperva entitled: "The New PCI Requirement: Application Firewall vs. Code Review".
For those not familiar with the Payment Card Industry (PCI) compliance regulations, visit the PCI DSS (Data Security Standard) website.
From my perspective, PCI is an excellent set of security compliance regulations, perhaps the best that exist in the U.S. Items are clear, concise, and specific (sometimes too specific), which is in stark contrast to other regulations, such as SOX, HIPAA, GLBA, etc.
Here's the specific PCI requirement in question:
6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:
• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.
Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.
If you're bored already, you can sum up this problem as a typical 'business case for better engineering' problem.
We (the IT and Security field) know how to achieve higher security assurance in critical applications: thorough source code analysis coupled with simple design. However, we know that for many (if not most) organizations to adopt this mindset, it will require an overhaul of their existing software lifecycles. To put it bluntly: few people are truly analyzing their design and their code.
So, to our rescue comes a typical information security vendor, ready to offer a hybridized, second-rate solution to the problem. [To be fair, I think it is a well-engineered solution, but for the wrong problem-- the problem of managing an application without truly understanding its code.]
Now to jump in and dissect their whitepaper ...
"What is an “Application Layer Firewall”?This is stopping point #1. How is it possible that a vendor-- who has NEVER seen your organization's application-- can create a drop-in box that can "understand the full spectrum" of your application's traffic??? Again, to be blunt: they cannot. They can make guesses about commonalities among applications they have seen in the past (e.g. like customers whose applications were extensively analyzed for the purpose of product development). They may be even able to code in some inference engines that can make educated guesses at runtime, but they cannot ever truly "understand the full spectrum".
An application layer firewall, also known as a “Web Application Firewall” or “WAF” is a network device that is placed in front of the Web applications in an organization’s data center to protect against attacks. A WAF is able to view and understand the full spectrum of application traffic so that it can protect the applications and the sensitive data from illegitimate access and usage." [Italics are mine]
"Because of SecureSphere’s unique Dynamic Profiling capabilities, it automatically builds a complete baseline profile of your applications and network traffic in a matter of days. Using the application profile, SecureSphere can distinguish between legitimate user behavior and illegitimate behavior as well as protect from attacks... No manual intervention or tuning is necessary, keeping your on-going administrative costs far lower than other WAF products." [Italics are mine]So, yes, it is an inference engine; note the italicized portions of the quote above. What is striking is the claim that this behavioral analysis tool does not require human intelligence for tuning, which means either: 1) the engine is tuned to avoid false positives at the expense of neglecting true positives, or 2) the Marketing department is (ahem) over-estimating their product's ability. Any behavioral analysis tool requires extensive tuning.
"While code review is a good idea, and is consistent with coding best practices, hiring consultants entails extra cost, loss of flexibility, resource allocation issues, scheduling headaches, and ultimately a lower grade of security than you would achieve through the deployment of a Web Application Firewall." [Italics are mine]What amazes me in the world of marketing is that it's an acceptable practice to both claim a universal truth and debunk it in the same sentence. The whitepaper notes that code review is a "best practice", yet before a breath is taken attempts to claim that code review is a lower security assurance process when compared to using an application layer firewall, which simply doesn't make sense. Drop-in products work blind-- they do not know your organization's applications and they can only make inferences regarding them. Human Intelligence (this assumes training and expertise, hence the word "intelligence") will certainly trump; understanding the application is inherent to the process of good coding review and implementation.
If the assurance of externally facing applications is of utmost importance, then design/implementation time controls (e.g. static analysis) should be explored, not runtime controls (i.e. application layer firewalls).
...
UPDATED 12/10/2007: Here's a follow-up ...
Nate Lawson on DMA/IOMMU
In the past few days, Nate Lawson has posted a couple of interesting articles on PC Memory Architecture and Protecting Memory from DMA using IOMMU. Nate's got some good stuff in there, including comments around ring 0 code and virtualization ... many ideas that play well with the notion of separating code from data.
Sunday, September 23, 2007
More comments on the PDF vulnerability
Matasano has some comments on the recent PDF vulnerability:
"Modern PDF Readers do crazy things. Like embed remote web pages. That means they talk to the internet. That means more network attack surface!"From the overview (page 33) of Adobe's PDF Reference Manual:
"In addition to describing the static appearance of pages, a PDF document can contain interactive elements that are possible only in an electronic representation. PDF supports annotations of many kinds for such things as text notes, hypertext links, markup, file attachments, sounds, and movies. A document can define its own user interface; keyboard and mouse input can trigger actions that are specified by PDF objects. The document can contain interactive form fields to be filled in by the user, and can export the values of these fields to or import them from other applications." [italics are mine]The spec sounds almost like a general purpose Operating System, not a document data format. And since the data and the code are not well separated, Dave at Matasano is right:
"These conditions create the perfect storm for the modern attacker. This is going to get worse not better."I'm afraid there will be more holes found in PDFs, perhaps even to the point of businesses questioning the viability to remove PDF support from their systems.
Friday, September 21, 2007
Still more separation of code and data
The failure to separate code from data is a HUGE problem (possibly the root of all remote code execution evil). Here's more info, some of it new, some of it very old ...
There are a variety of vulnerabilities discovered recently that all involve document file types that either: A) allow executable code or script (which should be treated as code, not text data) to be embedded within the document, or B) suffer buffer overruns because of the complexity of the file types. [Special thanks to the blog post at Hackademix.net for the summary of recent events and for his contribution of NoScript.]
Schneier's blog also pointed readers to a paper from 2002 regarding Multics, "an operating system from the 1960s [that] had better security than a lot of operating systems today" (Schneier).
From the paper on Multics:
"Multics also avoided many of the current buffer overflow problems through the use of three hardware features. First and most important, Multics used the hardware execute permission bits to ensure that data could not be directly executed. Since most buffer overflow attacks involve branching to data, denying execute permission to data is a very effective countermeasure. Unfortunately, many contemporary operating systems have not used the execute permission features of the x86 segmentation architecture until quite recently [17].Amazing. Three features that most modern OSes still do not have. Note that the "first and most important" feature is that executable code is tagged with the execute bit. While that is not pure separation of code and data, it is an excellent start because it does at least distinguish between with memory objects are code and which are just data. Hence, a char/string based buffer overflow could not result in a remote code execution scenario because the object would not have the execute bit flipped.
Second, Multics virtual addresses are segmented, and the layout of the ITS pointers is such that an overflow off the end of a segment does not carry into the segment number portion. The net result is that addressing off the end of a segment will always result in a fault, rather than referencing into some other segment. By contrast, other machines that have only paging (such as the VAX, SPARC, or MIPS processors) or that allow carries from the page offset into the segment number (such as the IBM System/370) do not protect as well against overflows. With only paging, a system has to use no-access guard pages that can catch some, but not all references off the end of the data structure. In the case of the x86 processors, although the necessary segmentation features are present, they are almost never used, except for operating systems specifically designed for security, such as GEMSOS [32].
Third, stacks on the Multics processors grew in the positive direction, rather than the negative direction. This meant that if you actually accomplished a buffer overflow, you would be overwriting unused stack frames, rather than your own return pointer, making exploitation much more difficult." [Boldface and italics are mine]
To be fair, some OSes have limited support for NX (No Execute) bits (which, if you're paying attention, you'll notice amount to "black lists" rather than the "white lists" Multics uses), but the quality and thoroughness of the implementation is typically limited by the support available from the CPU architecture employed. From the list, though, it looks like a fair number of the 64-bit CPUs are well supported.
Windows Vista will support NX (along with ASLR), but there may be some caveats, such as the typical requirement that vendors opt in to the feature. From AMD:
"For NX to be applied to a given executable, three conditions must be met. First, the processor has to support NX. As mentioned, any AMD64 processor will fill the bill here. Second, EVP support has to be on globally, which will be true for all 64-bit Vista builds and all 32-bit Vista builds unless NX support is explicitly switched off with a boot configuration option. And finally, Vista has to mark all of the program's non-code pages with the NX flag, which is determined by the operating environment (32-bit or 64-bit Vista), boot configuration options, and the program itself."Overall, that may make a compelling argument to switch to 64 bit architecture.
So, what's today's lesson? Either separate code from data or pay for it eternally as you penetrate and patch.
Thursday, September 20, 2007
Symantec Considers White Lists
After years of attempting to convince every Symantec SE I met to drop the Sisyphean virus signature database model, it appears that Symantec is finally giving serious consideration to white lists.
Virus signatures, or lists of all the bad software that your AV vendor thinks you wouldn't want to run on your computer, are "black lists" (or "bad lists" for those of you who, like me, aren't in favor of even a nuance of color discrimination in language). For many people, security practitioners included, the thought of the converse model of "white lists" (or, again, "good lists") has not even entered their minds. In AV terms, this would mean building an anti-malware (yes, "malware", to encompass any of the garbage that you don't want to become CPU or memory resident on your systems) solution that allows only known good code to execute. "Why would anyone want to do that?" you might ask ... Well, because keeping up with all the bad things is Sisyphean (as in rolling a large boulder uphill only to have it fall back down on you several times).
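To make "allow only known good code to execute" concrete, here is a deliberately tiny sketch of a default-deny launcher. The allowed[] entries are made-up examples, and a real product would identify binaries by cryptographic hash and signature rather than by path as this toy does.

```c
/* Toy default-deny ("white list") launcher: refuse to run anything that is
 * not on the known-good list. Real products key the list on cryptographic
 * hashes/signatures of the binary, not on file paths as this sketch does. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static const char *allowed[] = {
    "/usr/bin/ssh",      /* hypothetical entries for illustration */
    "/usr/bin/vi",
    NULL
};

static int is_allowed(const char *path) {
    for (int i = 0; allowed[i] != NULL; i++)
        if (strcmp(path, allowed[i]) == 0)
            return 1;
    return 0;   /* default deny: unknown means "no" */
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/program [args]\n", argv[0]);
        return 2;
    }
    if (!is_allowed(argv[1])) {
        fprintf(stderr, "refusing to run %s: not on the allow list\n", argv[1]);
        return 1;
    }
    execv(argv[1], &argv[1]);   /* only reached for known-good programs */
    perror("execv");
    return 1;
}
```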
Here's a quick graph to depict the rate of virus variant increases over the past couple decades, taken from F-Secure's blog:
At first glance, there are portions of that curve that look to be increasing quadratically. [Disclaimer: this chart is not meant to be thoroughly scientific, but more of a generalization to paint a picture in a quick blog post.] Ask your average enterprise IT administrator if the number of good, trustworthy applications is increasing on a similar scale. The answer should be not just "no" but "heck, no". If it were, there would be all sorts of other management issues, such as change/release/version control, and redundancy of similar applications. So, to put it simply, "white lists" in AV allow an organization to approach their malware problem at the same pace as they approach their "bonware" (beneware? goodware? niceware?).
Marcus Ranum has been saying this for years, too. Review his "6 Dumbest Ideas in Computer Security", starting with #1 (Default Permit) and #2 (Enumerating Badness) which are exactly this issue.
Wednesday, September 19, 2007
Trust at the foundational levels: IOMMU & DMA
IOMMU, or Input Output Memory Management Unit, will likely play a large role in the security of future operating systems. If IOMMU does not play a large role, it will hopefully be because there is something better (that is to say, hopefully IOMMU is not neglected from future computer architectures). In a nutshell, IOMMU is like a mini-firewall for RAM (yes, there are problems with this over-simplification, but bear with me), controlling hardware's access to critical memory locations to prevent ignorance or malice from, say, using a DMA-connected device to patch a kernel at runtime.
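To illustrate the "mini-firewall for RAM" analogy, here is a toy model, entirely mine and mirroring no real IOMMU's programming interface: each device gets a table of memory windows it may touch, and every DMA transfer is checked against that table before it is allowed to land.

```c
/* Toy model of what an IOMMU does conceptually: a per-device table of
 * permitted memory windows, consulted before any DMA write is allowed.
 * This mirrors no real hardware interface; it only illustrates the idea. */
#include <stdio.h>
#include <stdint.h>

struct window { uint64_t base, len; };

struct device_map {
    const char   *name;
    struct window allowed[4];
    int           count;
};

/* Does the transfer [addr, addr+len) land entirely inside a permitted window? */
static int dma_permitted(const struct device_map *dev, uint64_t addr, uint64_t len) {
    for (int i = 0; i < dev->count; i++) {
        const struct window *w = &dev->allowed[i];
        if (addr >= w->base && addr + len <= w->base + w->len)
            return 1;
    }
    return 0;   /* default deny, like a firewall */
}

int main(void) {
    /* A NIC that is only supposed to touch its own packet buffers. */
    struct device_map nic = { "nic0", { { 0x10000000, 0x100000 } }, 1 };

    uint64_t packet_buf = 0x10004000;   /* inside the NIC's window  */
    uint64_t kernel_txt = 0x00200000;   /* kernel memory: off limits */

    printf("write to packet buffer: %s\n",
           dma_permitted(&nic, packet_buf, 1500) ? "allowed" : "blocked");
    printf("write to kernel text:   %s\n",
           dma_permitted(&nic, kernel_txt, 1500) ? "allowed" : "blocked");
    return 0;
}
```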
DMA, or Direct Memory Access, was designed to rid computer architectures of the performance problems associated with sending all memory IO requests through the CPU. It's yet another instance of the convenience versus security trade-off.
Most OSes in use today have monolithic kernels; when they are juxtaposed with their evolved cousins, microkernels, the differences in the trust models used within each become apparent. Monolithic kernels are like DMA: they make a convenience-over-security trade-off; system drivers and services run in kernel mode for convenient access to each other's data (which is scary if one of those is corrupted and taken over by an adversary, or an adversary's malicious code). Microkernels, on the other hand, do not make that trade-off; all IPC (inter-process communication) is routed the long way around while all drivers and services run in userland-- not privileged memory space. One example is MINIX, an academic OS with a microkernel design.
Microkernels and IOMMU go hand in hand, as both are attempting to address trust at the foundational levels, IOMMU at the hardware layer and Microkernels at the OS (control/allocation of hardware) layer.
Joanna Rutkowska is a security researcher who focuses on security issues closer to the hardware than most, such as kernel fundamentals that may or may not lead to rootkits. What Joanna has pointed out is no surprise to operating system researchers. From Joanna's Blog:
"I must say really like the design of MINIX3, which keeps all the drivers (and other system components) in usermode, in separated address spaces. This is, however, still problematic today, as without IOMMU we can’t really fully protect kernel from usermode drivers, because of the potential DMA attacks – i.e. a driver can setup a DMA write-transaction to overwrite some part of the micro kernel memory, thus owning the system completely. But I guess we will all have processors supporting IOMMU within the next 1-2 years."
The important thing to note here is not necessarily that the OS an enterprise chooses for critical computing should change to MINIX3 (although maybe it should), but rather: until there are proficient controls to ensure the trustworthiness of the foundational levels of computing, worrying about the trustworthiness of everything that runs on top of the foundation is pointless. To quote Schneier: "security is a chain; it's only as strong as its weakest link."
Wednesday, September 12, 2007
Se Cura: Free of Care or Worry
"Secure" derives from the Latin words se, meaning "without", and cura, meaning "care" or "worry". In simplest terms, being secure means not having to worry about security. Threats to well-being are just not of concern.
This is important to note as most people use the term "secure" to mean a variety of different things. There are some people who use "secure" interchangeably with the elements of the CIA Triad. There are some who mean that an asset is important and adequately protected. And there are those who even use "secure" to mean "comfortable", as in "I am secure with who I am". What's surprising is how close that last use really is to its etymological roots. There are even those who use "secure" to mean physically fastened to another object. Personally, I am most frustrated with people who use the word "secure" or "security" interchangeably with, say, Confidentiality, because it shows a flagrant ignorance of the bigger picture of what it takes for, say, users of a system to be comfortable that their information assets are truly safe. Clearly, more is needed than just Confidentiality, hence all of these models like the CIA Triad and Parker's Hexad. [For the record, I think both of those models are inaccurate, but that's for another time.]
The Woes of TOR
I predicted this a year or so ago (when I first heard of TOR), but as predictions go, they don't have value if they aren't published. Now there are issues in the news about how TOR doesn't provide the security that users expected.
The following is a picture from the EFF's website depicting how TOR supposedly provides anonymity for users' web browsing:
Just like perpetual motion is impossible, so is this idea of anonymity. In simplest terms, it comes down to understanding how trust works. A TOR user trusts (an action) the TOR client on her computer, which trusts the TOR nodes to properly route her data without breaching confidentiality. The important little detail that is so often overlooked is that these TOR nodes are operated by ... that's right ... people. And people will ruin a security model every time, either deliberately (malice) or accidentally (ignorance). [Hanlon's Razor comes to mind: "Never attribute to malice that which can be adequately explained by stupidity."]
In the case of the embassy's pitfalls with TOR, users assumed (watch that!) that their traffic was not being monitored by the TOR network nodes, when in fact it was. This is not a pitfall of TOR's implementation, but of TOR's design. Since this is an anonymous network of nodes operated by people such that Adam does not know Eve, how can any user ever expect a trustworthy (not an action, but a state of assurance) network? Compare this to the "real world" ... would Jane expect that, if she were to go to some special coffee shop and share her secrets with a total stranger, those secrets would stay secret? Regardless of whether Jane has that expectation, it would be prudent for you not to have it yourself.
So, extrapolating upon this situation ... how long will it be until we have law enforcement regularly participating in TOR networks? The logical next step is either an arms race inside the TOR network or a total breakdown of the network altogether. Either we will see law enforcement participate and "security researchers" (term used loosely) evaluate methods to evade untrustworthy--yet totally anonymous--TOR nodes, OR we will see law enforcement agencies lobbying their respective governments to make TOR illegal. From where I sit, the former looks like the better option. And why not? After all, there are millions of ignorant people out there who will assume that total anonymity is actually possible-- actually achievable. But it's not. That's dictated by the physical laws of information security. And while they're out there using TOR, law enforcement has a new "beat to walk": the TOR networks.
Another perpetual motion attempt in InfoSec today is DRM which will be discussed in the future.
Wednesday, September 5, 2007
Separation of Code and Data
One of the most surprising things for a savvy information security practitioner is the continued intermingling of code and data in modern information systems. It is as if some mandate dictates that these items must never be segregated. This is the root cause of every buffer overrun/overflow exploit, every XSS or SQL Injection attack, even Microsoft's infamous run of bugs in office documents and images (think GDI). Yet, today, there are simply too many systems that do not properly (if at all) segregate data objects from executable code objects in memory.
Buffer overruns
The gist is that a program allocates memory for a data object (as in "information" of some type), but because of some implementation bug-- typically an unchecked copy-- input containing executable code can be written where only data was expected, since the program does not distinguish the two. And eventually, mainly because the system does not require executable code to be stored in a location with better access control, separate from data elements, the data object is unintentionally executed, frequently carrying a malicious payload.
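A minimal sketch of the classic case: the copy below never checks how much "data" arrived, so input longer than the buffer spills into adjacent memory (on most stacks, eventually over the saved return address).

```c
/* The textbook unchecked copy: nothing distinguishes "data" that fits the
 * buffer from "data" that overwrites whatever lives next to it.           */
#include <stdio.h>
#include <string.h>

static void greet(const char *input) {
    char name[16];                 /* room for a short name...            */
    strcpy(name, input);           /* ...but the copy never checks length */
    printf("hello, %s\n", name);
}

int main(int argc, char **argv) {
    /* With a long argv[1], the bytes past name[15] smash adjacent stack
     * memory; if they are attacker-chosen and the stack is executable,
     * that is remote code execution.  A length-checked copy (snprintf,
     * strlcpy where available) is the boring fix.                        */
    greet(argc > 1 ? argv[1] : "world");
    return 0;
}
```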
Cross Site Scripting
Most cases of XSS are no different. Essentially, data elements (usually expected to be in the form of plain, rich, or html text) are uploaded containing client-side executable scripts with malicious payloads.
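The fix is the same discipline in miniature: treat uploaded text strictly as data by encoding it before it is placed into an HTML (code) context. A rough sketch of that output-encoding step:

```c
/* Output encoding: render user-supplied text as inert data inside HTML,
 * so a stored "<script>..." payload displays as text instead of running. */
#include <stdio.h>

static void html_escape(const char *s, FILE *out) {
    for (; *s; s++) {
        switch (*s) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#39;", out);  break;
        default:   fputc(*s, out);       break;
        }
    }
}

int main(void) {
    const char *comment = "<script>alert('owned')</script>";  /* hostile "data" */
    fputs("<p>", stdout);
    html_escape(comment, stdout);   /* emitted as text, never parsed as markup */
    fputs("</p>\n", stdout);
    return 0;
}
```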
SQL Injection
Again, same song, second verse, only this time the "data" is really malicious SQL commands uploaded as text.
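Here, too, the cure is keeping the code (the SQL statement) fixed and passing the text strictly as data. A sketch using SQLite's prepared-statement API; the table and column names are invented for the example.

```c
/* Parameterized query: the SQL "code" is compiled once, and the user's text
 * is bound purely as data, so "' OR '1'='1" stays a literal string.
 * Table/column names are invented for the example. Build with -lsqlite3.   */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    const char *user_input = "alice' OR '1'='1";   /* hostile "data" */

    sqlite3 *db = NULL;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE users(name TEXT); INSERT INTO users VALUES('alice');",
                 NULL, NULL, NULL);

    /* DON'T: build the statement by pasting user_input into the SQL string. */
    /* DO: keep the statement fixed and bind the input as a parameter.       */
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, "SELECT name FROM users WHERE name = ?1",
                           -1, &stmt, NULL) != SQLITE_OK) return 1;
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);

    int rows = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW) rows++;
    printf("matching rows: %d\n", rows);   /* 0 -- the injection text matched nothing */

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
```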
Why, if the top technical causes of malware all share the commonality of not properly segregating code and data, do we not yet have systems that build this property in?
Excellent question, of course. The current-generation attempts to solve this include Address Space Layout Randomization (ASLR) and, more formidably, No Execute (NX). Windows Vista's downfall, of course (all debate on licensing aside), is that for backwards-compatibility's sake old applications cannot be forced to segregate their memory objects. And the patterns of lazy or ignorant developers yet again plague both the consumer and the enterprise.
Perhaps one day we'll get this right.