Microsoft says there is no increase in security risk; however, experts say access to source code could make some steps easier for attackers.
Microsoft confirmed last week that attackers were able to view some of its source code, a discovery made during its ongoing investigation of the SolarWinds breach. While the company’s threat-modeling approach mitigates the risk posed by attackers viewing its code, many open questions could still determine the severity of this attack.
SolarWinds disclosed on Dec. 14 that attackers had infiltrated its software build system and inserted malicious code into software updates the company subsequently sent to some 33,000 organizations worldwide, about 18,000 of which actually installed it. The company has said that updates it released between March and June 2020 were tainted.
In a blog post published Dec. 31, 2020, Microsoft said it has not found evidence of access to production services or customer data, nor any indication that its systems were used to attack other companies. The company also found no signs of the common tactics, techniques, and procedures (TTPs) linked to abuse of forged SAML tokens against its corporate domains.
It did find that one internal account had been used to view source code in “a number of code repositories,” according to the blog post from the Microsoft Security Response Center (MSRC). Investigators unearthed the activity after noticing unusual behavior on a small number of internal accounts, the post explains, and the affected account did not have permissions to change any code or engineering systems. The accounts were investigated and remediated, officials noted.
The news generated attention in the security community, and with good reason: Microsoft’s software is among the most widely deployed in the world, and organizations of all sizes rely on the company’s products and services. That makes it an appealing target, particularly for advanced attackers like those behind the SolarWinds incident.
“It’s something they can’t access themselves, and there’s a lot of assumption that there’s super-secret things there that are going to compromise [their] security,” says Jake Williams, founder and president of Rendition Infosec, explaining why businesses might understandably panic at the news.
While it’s certainly concerning, and we don’t know the full extent of what attackers could see, Microsoft’s threat-modeling strategy assumes attackers already have some knowledge of its source code. This “inner source” approach adopts practices from open source software development and culture, and it doesn’t rely on the secrecy of source code for product security.
“There are a lot of software vendors, and security vendors, that rely on the secrecy of their code to ensure security of applications,” Williams explains. Microsoft made a big push for secure software development with Windows Vista. It didn’t open source the code, but it designed the code under the assumption that it might someday be made public. Source code is viewable within Microsoft, and the ability to view it isn’t tied to heightened security risk.
“If the code is all publicly released, there should not be new vulnerabilities discovered purely because that occurs,” Williams adds.
Microsoft’s practice isn’t common; for most organizations, adopting the same approach and revamping their existing code base would be too much work. However, Microsoft is a big enough target, with people regularly reverse engineering its code, that the approach makes sense.
While the attackers were only able to view the source code, not edit or change it, that level of access could still prove helpful for some tasks, such as writing rootkits. Microsoft, which did not provide additional detail for this story beyond its blog post, has not confirmed which source code was accessed or how that particular code could benefit an attacker.
It’s one of many questions that remain following Microsoft’s update. What have the attackers already seen? Where was the affected code? Were the attackers able to access an account that allowed them to alter source code? There is still much we don’t know regarding this intrusion.
This “inner source” approach still creates risk, writes Andrew Fife, vice president of marketing at Cycode, in a blog post on the news. Modern applications include microservices, libraries, APIs, and SDKs that often require authentication credentials to deliver a core service. It’s common for developers to write those credentials directly into source code on the assumption that only insiders can see it.
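To illustrate the risk Fife describes, here is a minimal sketch of the two patterns; the service name, environment variable, and key below are hypothetical, not taken from any breached repository:

    import os

    # Anti-pattern: a credential embedded directly in source code. Anyone
    # who can view the repository, insider or intruder, can read it.
    PAYMENTS_API_KEY = "sk_live_hypothetical_key_do_not_use"  # hypothetical secret

    # Safer pattern: keep the secret out of the code base and inject it at
    # runtime, e.g., via an environment variable or a secrets manager.
    def get_payments_api_key() -> str:
        key = os.environ.get("PAYMENTS_API_KEY")
        if key is None:
            raise RuntimeError("PAYMENTS_API_KEY is not set")
        return key

Under the second pattern, an attacker who views the source learns only that a secret exists and where it is used, not its value, which is a large part of why read-only access to a well-kept repository is less damaging.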
“While Microsoft claims their ‘threat models assume that attackers have knowledge of source code,’ it would be far more reassuring if they directly addressed whether or not the breached code contained secrets,” he writes. In the same way source code is a software company’s IP, Fife adds, it can also be used to help reverse engineer and exploit an application.
This is an ongoing investigation, and we will continue to provide updates as they are known. In the meantime, Williams advises organizations to continue applying security patches as usual and stick with the infosec basics: review trust relationships, check your logging posture, and adopt the principles of least privilege and zero trust.
“Supply chain attacks are really difficult to defend against, and it really comes back to infosec foundations,” he says. “If your model of protecting against an attack is ‘give me an indicator of compromise and I will block that indicator,’ that’s ’90s thinking.”
Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services.