Strengthening the Foundation: Application Security within Product Security

I’m a firm believer in the power of application security. So much so that I’ve written a book, blogs, and articles about it. I’ve spoken at conferences, on podcasts, and even have a YouTube channel with content related to AppSec. I teach it at university and online. I’ve built AppSec teams. I’ve sort of built my career around it. But if you asked me to define it today, compared to 10 years ago, I’m likely to give you a different answer.

While AppSec is a crucial component within the broader discipline of product security, it’s often misunderstood and left to fight for scraps in a security budget that is itself fighting for scraps in the IT budget. In other words, much of the work in application security depends on changing the hearts and minds of the engineering and development community in an organization, primarily because that effort is cheap.

So, what is AppSec’s purpose?

If you ask a few people, you’ll likely get varying definitions of what AppSec is, just as you would from me. Generally speaking, though, AppSec focuses on ensuring that software applications are designed, developed, and maintained to protect against vulnerabilities and threats. This means delivering secure software through a well-defined secure software development life cycle (SSDLC) or through the practices of secure DevOps (DevSecOps).

Boiled down to its simplest parts, this means an organization puts in the effort to ensure that the software it offers to its customers is free from security vulnerabilities and protects against the loss of confidentiality, integrity, and availability when threats take advantage of weaknesses in the software. That’s a mouthful, but it essentially means: don’t release insecure software to your customers. Sounds simple, but we wouldn’t have a whole discipline in security if it were.

SSDLC or DevSecOps

A Secure SDLC (Software Development Life Cycle) is a framework that incorporates security at every phase of the software development process. It is designed to validate that security considerations are integrated from the initial enhancement or feature request all the way through development, deployment, and maintenance. The goal is to minimize vulnerabilities and improve the overall security of the software. Notice I said “minimize”. Security is about balancing risk. There is a risk in delivering insecure code, but there is a business risk in missing deadlines. Addressing security early in the development process helps identify and mitigate security risks while they are still cheap to fix. In theory, a strong SSDLC will reduce the cost and complexity of addressing security issues after deployment (i.e., in production).

Security is about balancing risk. There is a risk in delivering insecure code, but there is a business risk in missing deadlines.

DevSecOps is a little different as it is an approach to culture, automation, and platform design that integrates security as a shared responsibility throughout the entire development life cycle. It bridges traditional gaps between development and security (like what DevOps has done for development and operations) while providing fast and secure delivery of code. In a DevSecOps model, security and development teams collaborate closely from the beginning of a project, using automated tools to monitor and resolve security issues in real time, thereby increasing the opportunities to integrate security. This methodology helps organizations to deploy secure software faster and more efficiently.

There is no “right” way to approach software security, except to say that the approach needs to match the software engineering organization. Most modern organizations today follow a DevOps model, with automation, tools, and processes in place to deliver code rapidly. A traditional SSDLC often relies on gates to confirm that code passes the sniff test before proceeding. Think “waterfall-ish”. These are blanket statements, and if you looked under the hood of most development organizations, you’d see a mish-mash of both approaches.

The practice of AppSec

Whether you are working in DevSecOps or an SSDLC (or both), the people, processes, and technology that are leveraged define the AppSec program. I’m fond of saying that I can create an AppSec program dozens of different ways, and none of those rely on specific tools or practices. However, there are table stakes when it comes to AppSec; these are the tools you are likely to see in most AppSec programs (a short sketch after the list shows the kind of flaw SAST is designed to catch):

  • Static Application Security Testing (SAST): Analyzes source code for vulnerabilities without executing it.

  • Dynamic Application Security Testing (DAST): Tests the application while it's running to find runtime vulnerabilities.

  • Interactive Application Security Testing (IAST): Combines elements of SAST and DAST by testing applications from within using agents.

  • Software Composition Analysis (SCA): Identifies known vulnerabilities in third-party components and libraries.

  • Penetration testing: Simulated cyberattacks to find exploitable vulnerabilities.

  • Web Application Firewall (WAF): Protects web apps by filtering and monitoring HTTP traffic between a web application and the Internet.

  • Secure development training: Security training creates a culture of security, making security considerations a fundamental part of development.
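
To make the SAST bullet above concrete, here is a minimal, hypothetical Python sketch (not from the article) of the kind of flaw a static analyzer typically flags, string-built SQL, alongside the parameterized alternative most tools recommend as the fix. The table, function names, and use of sqlite3 are only for illustration.

```python
import sqlite3


# Vulnerable pattern: building SQL by string concatenation.
# Most SAST tools flag this as a potential SQL injection sink
# because user-supplied input flows directly into the query text.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()


# Safer pattern: a parameterized query keeps data out of the SQL text,
# which is the remediation a SAST finding on the function above would point to.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The value of SAST is that it surfaces the first pattern during development, long before the code reaches a running environment.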

There are other, perhaps less common, tools and practices that can be found in AppSec programs, such as:

  • Runtime Application Self-Protection (RASP): Protects applications from within by detecting and preventing real-time attacks. It integrates with the application, identifying and blocking threats as they occur.

  • Bug bounty programs: Incentivize individuals to find and report bugs or vulnerabilities in software, offering rewards for these discoveries. It's a way to leverage the skills of external cybersecurity experts.

  • Application Security Posture Management (ASPM): Involves continuously identifying, assessing, and improving the security posture of an application throughout its life cycle. It validates that applications remain secure against evolving threats.

  • Threat modeling: A process for identifying and prioritizing potential threats to a system and devising countermeasures to prevent, or mitigate the impact of, these threats (a small sketch of its output follows this list).
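
Threat modeling is mostly a discussion exercise rather than a tool, but its output can be captured in a structured, reviewable form. Below is a minimal, hypothetical sketch using the common STRIDE categories; the components, threats, and mitigations are invented for illustration, not taken from any real model.

```python
from dataclasses import dataclass

# STRIDE categories commonly used to prompt threat discussion.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]


@dataclass
class Threat:
    component: str    # part of the system being examined
    category: str     # STRIDE category
    description: str  # what could go wrong
    mitigation: str   # planned countermeasure


# Illustrative entries for a hypothetical login service.
threats = [
    Threat("login endpoint", "Spoofing",
           "Credential stuffing with previously breached passwords",
           "MFA by default, rate limiting, breached-password checks"),
    Threat("profile sharing", "Information Disclosure",
           "One compromised account exposes data shared by related accounts",
           "Limit shared fields, alert on unusual export volume"),
]

for t in threats:
    print(f"[{t.category}] {t.component}: {t.description} -> {t.mitigation}")
```

Tracking threats in a form like this keeps the countermeasures visible to the team instead of leaving them on a whiteboard.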

This is not a definitive list of tools and processes in AppSec, as more come on the scene every year with new ways to tackle old problems. However, the fundamentals of AppSec still come down to finding vulnerabilities early in the development life cycle and remediating them. Many of the above tools either allow the development organization to identify weaknesses in the early stages (such as threat modeling, SAST, and SCA) or protect the application in a running environment (such as WAF, RASP, and ASPM).
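
To make the SCA part of that concrete, here is a rough sketch of the idea behind Software Composition Analysis: compare a project’s pinned dependencies against known-vulnerable releases. Real tools pull from vulnerability databases such as the NVD or OSV; the package names and advisories below are made up for illustration.

```python
# Illustrative advisory data only; not real packages or CVEs.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2024-0001: unsafe deserialization",
    ("logthing", "2.14.0"): "EXAMPLE-2021-0002: remote code execution",
}


def scan_requirements(lines):
    """Yield (package, version, advisory) for each pinned, known-bad dependency."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        package, version = line.split("==", 1)
        advisory = KNOWN_VULNERABLE.get((package.lower(), version))
        if advisory:
            yield package, version, advisory


if __name__ == "__main__":
    manifest = ["examplelib==1.2.0", "requests==2.31.0"]
    for pkg, ver, advisory in scan_requirements(manifest):
        print(f"{pkg} {ver} is affected: {advisory}")
```

The point is not the lookup itself but the practice: knowing what third-party code ships in your product and whether any of it carries known weaknesses.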

How does AppSec fit in Product Security?

Now that you understand all the ins and outs of application security (I’m kidding, I could create a whole series just on the topic), you can start to see the role it plays in the broader product security space. I stated in my previous episode on product security that:

By focusing on ProdSec, we are prioritizing the security of the entire software product that integrates numerous applications, systems, services, environments, and other elements. This approach not only aligns with current development practices but also addresses the comprehensive needs and expectations of users and organizations, ensuring robust protection throughout the product lifecycle. (The Secure Product Lifecycle)

Everything we operate that we deem “smart” has software in it. The phone in our pocket, the MRI machine in the hospital, the car we drive, the rocket that launches the satellite, the satellite, the point-of-sale machine at the restaurant, the laptop and editor that I’m using to write this, the site I host it on. You get the point. We’re beyond the days when software was purpose built and left to narrow use cases. Today, software is as ubiquitous as ever.

This creates a large attack surface: when the devices that surround us are connected to a network (more on this in upcoming episodes), attackers can enter from a distance and take advantage of weaknesses in the software running on those devices. Developers are under increasing pressure to release features, often leaving known and unknown vulnerabilities for resolution at a future time. Those vulnerabilities become windows into the broader ecosystem that the software is connected to.

While the stories of breaches, data loss, ransomware, and outages abound, one notable story was the 23andMe data breach in late 2023. 23andMe, a genetics testing company, experienced an attack, leading to unauthorized access to customer profile information. The incident, advertised on BreachForums by a threat actor named "Golem," involved the sale of DNA profiles and corresponding email addresses, with prices ranging up to $100,000 for 100,000 profiles. The breach was attributed to compromised customer login credentials, not a direct system intrusion. Over 6 million individuals' information was accessed, including sensitive data related to ancestry. 23andMe highlighted the lack of multi-factor authentication (MFA) use by some customers as a contributing factor.

Kind of not great to blame your users, but that’s the world we’re in. The attackers used credential stuffing to compromise accounts and then used a feature in 23andMe that allows users to share their information within the 23andMe network.

Credential stuffing is a cyber-attack method where attackers use leaked or stolen usernames and passwords from one breach to gain unauthorized access to accounts on other platforms. This technique exploits the common practice of reusing the same login credentials across multiple services.
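
Credential stuffing tends to show up in telemetry as a spray of failed logins across many different accounts from the same source. As a minimal sketch of how a runtime defense (a WAF or the application itself) might flag that pattern, here is some hypothetical Python; the thresholds, names, and data structures are illustrative only, not a production control.

```python
from collections import defaultdict, deque
from typing import Optional
import time

# Illustrative thresholds only; real systems tune these to their own traffic.
WINDOW_SECONDS = 300       # look back five minutes
MAX_FAILED_ACCOUNTS = 20   # distinct accounts with failures from one source

failed_logins = defaultdict(deque)  # source_ip -> deque of (timestamp, username)


def record_failed_login(source_ip: str, username: str,
                        now: Optional[float] = None) -> bool:
    """Record a failed login and return True if the source looks like credential stuffing."""
    now = time.time() if now is None else now
    attempts = failed_logins[source_ip]
    attempts.append((now, username))
    # Discard attempts that fall outside the detection window.
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()
    # Stuffing typically touches many *different* accounts from one source,
    # unlike a brute-force attack that hammers a single account.
    distinct_accounts = {user for _, user in attempts}
    return len(distinct_accounts) >= MAX_FAILED_ACCOUNTS
```

A signal like this would usually trigger an alert or a step-up challenge (forcing MFA, for example) rather than an outright block, since shared corporate or mobile networks can look similar to an attacker.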

While users could take some of the blame here for not enabling multi-factor authentication (MFA), 23andMe could have made MFA the default given the sensitive nature of the data it retains. MFA-by-default is similar to what most bank sites and other critical accounts enforce today. In terms of product security, this breach shows how the connected nature of a product (in this case, the 23andMe service) allowed its users to be compromised with data leveraged from a different, historical attack. Once accounts were compromised, the sharing features in 23andMe allowed additional information to be gathered.

While this is just one story, the news is riddled daily with new and sometimes novel techniques for compromising products. Although, to be honest, many of the attacks are done the boring, old-fashioned way: compromising credentials. What could 23andMe have done differently? None of the tools in the AppSec tool chest would have been a silver bullet. In fact, SAST, DAST, or SCA would not produce a report showing the compromise. However, threat modeling, penetration testing, or a bug bounty would likely have raised this as a missing control. Runtime tools like RASP or a WAF may have seen the stuffing attacks happening in real time and generated alerts. This is to say, and this is a common theme in security, that a defense-in-depth approach is almost always a winning strategy.

But also: some lessons are learned the hard way.
