High-Profile Code Theft a Lesson in Data Loss Prevention

This past summer, the FBI arrested Sergey Aleynikov, a former Goldman Sachs computer programmer accused of stealing code that the firm used to perform proprietary trading. According to an affidavit filed by the FBI, Aleynikov copied "proprietary trade code" from his employer and uploaded it to a Web site in Germany. He then quit his job at the New York firm for a new company in Chicago that "intended to engage in high-volume automated trading." That firm paid Aleynikov around three times his old salary of $400,000.

David Etue, vice president of product development at Fidelis Security Systems, said the case is a teachable moment for companies that need to prevent data leakage on multi-gigabit-speed networks.

"The reason I find this case so interesting is Goldman actually found this and called the FBI," Etue said, emphasizing that Goldman is not a customer or competitor of Fidelity. "Unfortunately, they didn't stop it from happening, but they detected that it happened and went and did something about it. That's a big difference over the other data breaches that we've heard about. Typically, it's someone detected fraud in a credit card or someone posting that they did it."

According to Etue, it's rare for a breached organization to detect the violation itself, which suggests Goldman has "what appears to be a very good security posture." However, he feels the ultimate failure to stop the data loss points to where the data loss prevention industry is lagging.

"It's telling of the fact that most of our security controls to date have been around who has accessed information, not on what they do with it once they have access," he said. "Goldman happened to notice a large file transfer to Germany that they did not expect. They didn't know that it was source code or that it or that it was encrypted at the time; they just happened to notice."

Etue said that the security industry to date has spent a lot of time and money on ensuring proper information access management, asking questions such as: Is it the right user? Do they have access to the right information? Is the system configured in a way that people can't subvert those controls and get to information they shouldn't?

"But there's very little control of how information flows when one has rightful access to it," he said.

The Aleynikov story illustrates that the security threats companies face aren't limited to attacks from outside; they can also come from within.

"The insider threat is real, and it's a lot more than just identity information; in this case, [it's] incredibly high value source code," Etue said. "There's been a lot of discussion in the data leakage space that data leakage is about discovering broken systems processes, [such as] accounting is sending this payroll file unencrypted. It's finding places where you need to deploy encryption technologies or better train users [on] how to handle data. It hasn't been about actually stopping people from doing things they shouldn't."

Companies can accomplish this by first determining which of the information they hold is sensitive, and then making their policies clear on how and what employees are allowed to communicate. Etue recommends sorting these communications into three areas: official communications, such as work-based e-mails and file transfers to business partners; unofficial, rogue communications, such as using anonymizing Web proxies or peer-to-peer networking to get around security controls; and finally the gray area where the two are combined dangerously.
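
A rough sketch of how Etue's three categories might be turned into an enforcement decision follows; the channel names and the sensitivity flag are placeholders that a real program would have to define for itself.

```python
# Illustrative channel groupings for Etue's three categories.
OFFICIAL_CHANNELS = {"corporate-email", "partner-sftp"}
ROGUE_CHANNELS = {"anonymizing-proxy", "p2p"}
TOLERATED_CHANNELS = {"webmail", "facebook"}   # allowed for morale, recruiting

def classify(channel: str, content_is_sensitive: bool) -> str:
    if channel in ROGUE_CHANNELS:
        return "block"                  # unofficial, rogue communication
    if channel in OFFICIAL_CHANNELS:
        return "allow"                  # official business communication
    if channel in TOLERATED_CHANNELS:
        # The gray area: a tolerated channel carrying sensitive content.
        return "block" if content_is_sensitive else "allow"
    return "review"                     # unknown channel; escalate to a human

print(classify("webmail", content_is_sensitive=False))  # allow
print(classify("webmail", content_is_sensitive=True))   # block
print(classify("p2p", content_is_sensitive=False))      # block
```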

For example, "corporations may allow their employees access to Web mail [for morale purposes], but you shouldn't be using, say, Yahoo Mail to send sensitive corporate information," Etue said. "Or we might use Facebook for recruiting, but it shouldn't be used for collaborating on an engineering design. You often allow people to communicate in casual ways without a control on what people can communicate inside of it. So the challenge is defining what information can be shared with whom and what are the important mechanisms for communicating it."

The technical side of the challenge is keeping up with traffic on large multi-gigabit-speed networks. To actually stop data from leaving, an organization has to be able to turn that traffic into readable content and inspect it on the fly.

"If you took a PowerPoint presentation or Word document and zipped it up, we need to see in that zipped document in real time to decide to do something about it," Etue said. "If we wait until after it's gone to look inside it, it's only a detection system."

Etue emphasized that this approach needs to be scalable. "Doing that for one session is obviously a lot easier than doing that for tens of thousands of concurrent sessions," he said. "Then, if you actually want to be able to stop [data leakage, you need to be] able to interject fast enough to prevent something from occurring."
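
One way to picture the timing constraint is as a verdict deadline per session: if the analysis can't finish before the session would complete, prevention silently degrades into detection. The deadline value and the async structure below are illustrative, not a description of any vendor's engine.

```python
import asyncio

VERDICT_DEADLINE = 0.005  # seconds; an assumed inline budget per decision

async def inspect(payload: bytes) -> str:
    """Stand-in for content analysis; cost grows with payload size."""
    await asyncio.sleep(len(payload) / 10_000_000)  # simulated work
    return "block" if b"PROPRIETARY" in payload else "allow"

async def verdict(payload: bytes) -> str:
    # If the verdict misses the deadline, the bytes are already gone:
    # the control has become detection rather than prevention.
    try:
        return await asyncio.wait_for(inspect(payload), VERDICT_DEADLINE)
    except asyncio.TimeoutError:
        return "detected-too-late"

async def main():
    # Tens of thousands of concurrent sessions is the real target;
    # three illustrate the shape of the problem.
    sessions = [b"routine report", b"PROPRIETARY source tree",
                b"x" * 10_000_000]
    print(await asyncio.gather(*(verdict(s) for s in sessions)))

asyncio.run(main())
```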

This need must be addressed architecturally, in the design of the system itself.
