Schneier on Security
A blog covering security and security technology.
June 21, 2005
Underhanded C Contest
As far as I know, this is the only security-related programming contest: the Underhanded C Contest. The object is to write clear, readable C code with hidden malicious behavior; in other words, to hide evil stuff in code that passes visual inspection of source by other programmers.
This year's challenge: covert fingerprinting.
Just think of how much more secure the world was before integer overflows were discovered.
Even clever people will pick low hanging fruit.
By promoting cleverness you are making software engineering for the masses harder.
You might think that is a good thing, I don't.
Posted by: Sigh at June 21, 2005 01:25 PM
Sigh: Except that attackers have been doing exactly that. E.g., look at the attempted Linux kernel backdoor (which only got caught because the patch addition didn't follow the chain of command), which had
if (some_condition && (process_uid = 0))
which looks like error-checking code but actually sets the process uid to 0.
Posted by: Nicholas Weaver at June 21, 2005 01:42 PM
> By promoting cleverness you are
> making software engineering for
> the masses harder.
The difficulty involved in writing secure code is directly proportional to the value of its exploitation. So, practically speaking, the legitimate research and discovery of a novel technique either provides a means to enhance security, or it has no effect.
Posted by: BillW at June 21, 2005 02:09 PM
Hmmm... looks to be a tough challenge. I don't even know where to start. Generating the tracking id seems the hardest part. I can't think of a way to obtain a unique number without giving the game away. Calling a timer or random function is too obvious. Reading the PID or other parameters is too obvious as well.
Posted by: Chung Leong at June 21, 2005 02:23 PM
@Chung: Think about (re-)using some legitimate function, like some hash value. Instead of an obvious timer, you could use the last-accessed date of the script itself. Misalign some buffer to get a value which is supposed to be another one. Play with marginal values (loop starting at 0 or 1? ending at N or N+1?) and amplify this to misalign a buffer. Exploit some rather obscure API behavior which is not obvious. Use variable names which are slightly off but pass as correct, etc.
Obviously, there is some social engineering involved, since if you really scrutinize a script, you will find ANY problem. So it's a task to fool a not-so-perfect supervisor. So, exploit common weaknesses of programmers, laziness, etc.
Posted by: MP at June 21, 2005 02:37 PM
"By promoting cleverness you are making software engineering for the masses harder."
You mean it's better to promote stupidity instead?
Seriously, I expect the contest to do more good than harm. Once a trick is widely known, its ability to fool a reviewer diminishes. Coders who want to hide malicious code in plain sight will be forced to keep inventing new tricks, making such mischief more and more difficult.
Posted by: Anonymous at June 21, 2005 03:13 PM
"By promoting cleverness you are making software engineering for the masses harder."
Luckily, the masses aren't usually required to do any software engineering at all...
Posted by: Ricardo Barreira at June 21, 2005 03:46 PM
As it's the Underhanded C Contest, I don't think scripting languages are permitted. But in any event, if an executable looks at its own access time, it would look rather suspicious. Obscure API calls would too.
All I can think of is to fstat the input file, with the stated intention of getting its size. Instead of the expected structure, pass a pointer to something smaller, so the atime would overflow into an unrelated variable, which can then be used as the unique id.
That sounds rather lame I must say.
Posted by: Chung Leong at June 21, 2005 04:31 PM
I like it. It gets the dirt in the open. The black hats know this stuff because it's their stock-in-trade, but for everyone else it's new. Once the common tricks are known, we can do something (compiler warnings, lint-like scanners, etc.), but without knowing what to look for we're doomed.
C is also a nice language for the competition. Other languages have abstractions where you can hide fragments of your malice, but with C you're almost naked. You can expect your macros to be viewed with great suspicion.
The real benefits of this will occur way down the line when the low hanging fruit have been dealt with and the serious problems are understood. Much of current language design is about making good code look good, and bad code look bad. The challenge will be to understand how to avoid providing language features which allow black hats to pass off bad code as (plausibly) good.
Posted by: Terry Browning at June 21, 2005 06:13 PM
While I disagree with Sigh's analysis, I agree with his observation.
Security bugs have always been there but there are more classes of bug today than ever before - which probably would not have been discovered so soon if simple buffer overflows were still readily available.
This has raised the bar for well-designed software, but developers do have to learn an awful lot before they can program securely.
This is one reason why managed languages are inevitable for the majority of code.
Posted by: Polio at June 21, 2005 07:41 PM
Going to submit something, Bruce?
Posted by: Nick at June 21, 2005 10:35 PM
@Sigh, who wrote:
"Just think of how much more secure the world
was before integer overflows were discovered."
Well, no, you just *thought* it was safer.
Many security exploits were known even to white hat hackers, sometimes for years before they were understood by the vendor and fixed.
Today, defects tend to be discovered by white hats looking for the defects directly. Don't assume, however, that just because they were not discovered in a VXer worm building toolkit that hackers are unaware of them.
Any defect that can be discovered by a white hat by direct inspection of a years-old product like Windows 2000 might well have been discovered by a black hat years ago. Instead of feeding them to the script kiddies, black hats are likely to keep such exploits in their private toolkits for their own purposes.
Every system rooted by a worm this year could have been rooted by a cracker any time since Windows 2000 shipped. And some of them undoubtedly were.
You may want to read the article from the bottom up...
"Prize Since we're in Binghamton, NY, the prize will be a gift box from the nearby brewery Ommegang in Cooperstown, NY."
This is a contest after my own heart.
"Just think of how much more secure the world was before integer overflows were discovered."
*Integer* overflow? People have been aware of it for about as long as electromechanical or electronic computers have been with us.
Posted by: Anonymous at June 22, 2005 05:34 AM
Having tried to instill C programming and an awareness of Operating System and Network Security into 2nd and 3rd year undergraduates I appreciate this challenge.
a) It gives them something real and practical to aim at, and
b) As has been said above, discovering the real weaknesses in C coding will give these future demagogues an enhanced awareness of the challenges that they face, or the advantages they can pursue, depending on whether or not they embrace the dark side of the force!
Posted by: Ian Graham at June 22, 2005 08:30 AM
"Underhanded"? Surely they mean "underhand". Those tricky colonials'll be saying "transportation" instead of "transport" next!
Posted by: Rampo at June 22, 2005 08:36 AM
I think I have the unique id problem figured out. In the program we have three core functions:
In load_image(), we stat the input file to determine how large a buffer we need. That places the file's atime in a struct on the stack. In smooth_image(), we have a struct used by the image-processing algorithm. We'd arrange the local variables so that this struct occupies the same memory space in the stack as the stat struct in load_image(). "By chance," we forget to initialize the member variable where the atime was previously stored. That variable should then be unique every time you run the program.
The question is how to embed that number into the image...
Posted by: Chung Leong at June 22, 2005 09:11 AM
I'd like to see the converse as well. Give students source code for 5 programs and tell them that at least one (possibly more than one) contains malicious behavior. See if the students can figure out the bad code. Not only is this a good exercise to teach them what to look for, it can also expose them to different coding styles and what to expect if they have to maintain someone else's code.
Posted by: JohnJ at June 22, 2005 09:21 AM
So where are the results from last year's competition?
The ability to maintain someone else's code is a different security problem, but a very real one. Software documentation and coding standards (including review) can minimise the problems of maintainability and the possibility of introducing malicious or accidental artifacts.
Posted by: Ian Graham at June 22, 2005 09:34 PM
Read a book ;)
Seriously, in "Deep C Secrets" there was an interesting example of buffer allocation and de-allocation that allowed parts of the Unix password file to end up in tarballs on Solaris 2.5.
Having read about it, I used it to write an exploit to leak the key in a stream cipher system (encrypted with a public key) at a place I worked back in the last century ;)
Having got it through all the company code review process and at the point the product was in beta testing I wrote it up as a paper for the software manager...
I did it to demonstrate a real problem that had occurred: the company was using unchecked self-employed "code cutters" to write security software, and it was not properly checking what they were producing (other than some cursory functional testing).
The reason I thought it had to be demonstrated was that I had picked up on one (quite poor) attempt to backdoor the software, and management had ignored the issue (because I had picked it up...).
My point was and still is that even expert code reviewers will miss the subtle methods used by experts to leak information using high level languages.
If you do not do a proper code evaluation/review on security software at the processor level then you will not know exactly what the code does, and therefore accidental or deliberate leaks will happen.
The nice thing about this competition is that it will encourage people to think about these things and know what to look out for...
Posted by: Clive Robinson at June 23, 2005 03:39 AM
Have fun with C.
Posted by: saumya at March 23, 2006 08:43 AM