Idaho State University

INFO 5584 Secure Software Life Cycle Development 3 credits

In today's interconnected world, security must be included within each phase of the software lifecycle. This course presents a comprehensive collection of best practices, policies, and procedures to help ensure a security initiative across all phases of application development, regardless of methodology. Specific, evaluated graduate-level activities and/or performances are identified in the course syllabus. Prerequisites: INFO 4411, INFO 4403, INFO 4407 D, or instructor permission.

 

Course Information:

Creating Secure Software

In the book Hacking: The Art of Exploitation, author Jon Erickson accurately and succinctly expresses that an understanding of writing code helps those who exploit it, and an understanding of exploitation helps those who write code. A software developer, in other words, must first know how their code can be exploited (insecure code) and then, in turn, use that knowledge to write code that is not exploitable (secure code). Just as a physician treating a malady must first diagnose the core issue before treating the symptoms, one developing hack-resilient software must first understand what constitutes insecure code before attempting to address vulnerabilities.

Introducing Code (In)Security

It must be stated from the outset, however, that software security is more than writing secure code. In today’s security landscape, considerations must go beyond mere functionality to take security into account as well. There are several noteworthy resources on software security; among those worth mentioning are the Hacking Exposed series, 19 Deadly Sins of Software Security, Exploiting Software, Building Secure Software, and Writing Secure Code. But even though these resources are considered must-reads for any software developer, evidence from sources such as the Chronology of Data Breaches and security bug and full-disclosure mailing lists shows that software applications produced today are still rife with vulnerabilities. Security breaches are merely the symptoms of insecure design and programming, and unless software developers are trained to architect secure software and to identify what constitutes insecure code, the trend of software rampant with security weaknesses is likely to continue.
There is a lingering debate about who is ultimately responsible for software insecurity, and opinions vary: is it the software publishers, or the organization as a whole? Some feel that the blame should rest with the coder, but without proper education for software coders on how to write ‘secure’ code, or how not to write ‘insecure’ code, it is unreasonable to place the blame entirely on them.
It is the opinion of this author that software insecurity must be attributable to all stakeholders in the software development lifecycle, and that software developers, who write the code, can play a vital role in the development of secure software.

Insecure Code

As the Chinese adage goes, a journey of a thousand miles begins with a single step. The journey to develop secure software begins with the first step of identifying what makes up insecure code.
So what is insecure code? Insecure code is code that is vulnerable to security attacks. For the benefit of the reader, the word “insecure” itself may be used as an acrostic to describe the programming constructs and code that are vulnerable to security breaches. The following is by no means an all-inclusive list of everything that constitutes insecure code, but it is a compilation of the most prevalent programming constructs that have been observed to render software insecure.
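As one concrete illustration of insecure versus secure code (not drawn from the course materials; the table, user, and payload below are invented for demonstration), the following Python sketch contrasts an injectable SQL query with a parameterized one, using the standard-library sqlite3 module:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_insecure(name):
    # Insecure: attacker-controlled input is spliced into the SQL string,
    # so a crafted name can rewrite the query itself (SQL injection).
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_secure(name):
    # Secure: a bound parameter is always treated as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # returns every row: [('alice',)]
print(find_user_secure(payload))    # returns no rows: []
```

The same splice-versus-bind distinction applies to LDAP, OS command, and XML injection; the fix in every case is to keep untrusted input out of the command's syntax.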

Secure Software Concepts
Outlines the concepts and methodologies that allow all stakeholders involved in the software development lifecycle (SDLC), from operations personnel to architects, to execute application security best practices. For managers, it lays out the mechanisms that allow for directing or restraining what actions the program and the program user can perform. With regard to design and architecture, these mechanisms point out the considerations necessary to produce hack-resilient software.
A. Confidentiality, Integrity, Availability
B. Authentication, Authorization, and Auditing
C. Security Design Principles
 • Complete Mediation
 • Defense in Depth
 • Economy of Mechanism
 • Fail Safe
 • Least Common Mechanism
 • Least Privilege
 • Leveraging Existing Components
 • Open Design
 • Psychological Acceptability
 • Separation of Duties
 • Weakest Link
D. Risk Management (e.g., vulnerabilities, threats, and controls)
E. Regulations, Privacy, and Compliance
F. Software Architecture (e.g., layers)
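To ground two of the design principles listed above, the hedged sketch below (the roles, actions, and permission table are invented for illustration, not part of the course materials) shows fail-safe defaults and complete mediation in a toy authorization check: access is denied unless explicitly granted, and every operation passes through the same check.

```python
# Toy access-control sketch illustrating fail-safe defaults and complete
# mediation; the roles and permissions here are hypothetical.
PERMISSIONS = {
    ("analyst", "read_report"),
    ("admin", "read_report"),
    ("admin", "delete_report"),
}

def is_allowed(role, action):
    # Fail safe: anything not explicitly granted is denied by default.
    return (role, action) in PERMISSIONS

def perform(role, action):
    # Complete mediation: every request goes through the same check;
    # there is no cached result or bypass path.
    if not is_allowed(role, action):
        raise PermissionError(f"{role} may not {action}")
    return f"{action} performed"

print(perform("admin", "delete_report"))  # delete_report performed
# perform("analyst", "delete_report") would raise PermissionError
```

Note that the permission set also reflects least privilege: the analyst role carries only the read permission its job requires.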

I. Secure Software Requirements
This section covers the security and controls used to integrate security into the requirements-gathering phase of the Software Development Lifecycle (SDLC). Learners will identify key security objectives and maximize software security while minimizing disruption to plans and schedules. The security and control concepts include policy decomposition; identification and gathering of data for classification; and use and abuse cases.
A. Policy Decomposition 
 • Confidentiality, Integrity, Availability Requirements
 • Authentication, Authorization and Auditing Requirements
 • Internal and External Requirements
B. Identification and Gathering
 • Data Classification
 • Use Cases
 • Abuse Cases (inside and outside adversaries)

II. Secure Software Design
The design process defines secure software structure, including the documentation of software attack surface elements, the threat modeling process, and the creation of security criteria for software release.
To define secure software structure, the learner identifies the software components whose design and correct functioning are essential to security. Once these are defined, the learner will identify how the design team can create default security metrics through analysis of the attack surface elements and put processes in place to detect vulnerabilities and mitigate security risks through threat modeling. Both attack surface analysis and threat modeling are essential to secure software design.
A. Design Processes
 • Attack surface evaluation
 • Threat modeling (e.g., APT, insider threat, common malware, third party/supplier)
 • Control identification and prioritization
 • Documentation
 • Design and architecture technical review (e.g., reviewing interface points and deployment diagram, walk-throughs to verify requirements)
 • Risk Assessment for Code Reuse
B. Design Considerations
 • Application of Methods to Address Core Security Concepts
 • Security Design Principles
 • Interconnectivity
 • Interfaces (e.g., security management interfaces, out-of-band management, log interfaces)
C. Securing Commonly Used Architecture
 • Distributed computing (e.g., client server, peer-to-peer, message queuing)
 • Service-oriented architecture (e.g., enterprise service bus, web services)
 • Rich Internet applications (e.g., client side exploits or threats, remote code execution, constant connectivity)
 • Pervasive/Ubiquitous computing (e.g., wireless, location-based, RFID, near field communication, sensor networks)
 • Integration with existing architectures
 • Cloud Architectures (e.g., software as a service, platform as a service, infrastructure as a service)
 • Mobile applications
D. Technologies
 • Authentication and Identity Management
 • Credential management (e.g., X.509 and SSO)
 • Flow control (e.g., proxies, firewalls, middleware, message queuing)
 • Logging (e.g., application event logs, syslog)
 • Data Loss Prevention (DLP)
 • Virtualization
 • Digital Rights Management (DRM)
 • Trusted Computing (e.g., TPM, TCB, malware, code signing)
 • Database security (e.g., encryption, triggers, views, privilege management)
 • Programming Language Environment (e.g., CLR, JVM, compiler switches, sandboxing)
 • Operating Systems
 • Embedded systems (e.g., control systems, firmware)
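As one small, hedged illustration of the application-event-logging item in the list above (the logger name and event fields are invented for demonstration), the Python standard-library logging module can record security-relevant events; a real deployment would typically attach a logging.handlers.SysLogHandler or a log-management pipeline rather than the in-memory stream used here.

```python
import io
import logging

# Capture log output in memory for demonstration; production code would
# use logging.handlers.SysLogHandler or a file/aggregator instead.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

audit = logging.getLogger("app.security")  # hypothetical logger name
audit.setLevel(logging.INFO)
audit.addHandler(handler)

# Record a security-relevant application event (fields are illustrative).
audit.info("login_failure user=%s src=%s", "alice", "203.0.113.7")

print(stream.getvalue().strip())
```

Logging the event with separate, structured fields (rather than free text) is what later makes monitoring, auditing, and incident response tractable.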

III. Secure Software Implementation/Coding
The Characteristics of Insecure Code:
 I - Injectable   
 N - Non-repudiation Mechanisms Missing  
 S - Spoofable   
 E - Exceptions and Errors Mishandled  
 C - Cryptographic Weaknesses  
 U - Unsafe/Unused Functions and Routines  
 R - Reversible  
 E - Elevated Privileges Required to Run

To implement secure code, the learner will understand the coding standards that developers must follow to avoid introducing flaws that lead to security vulnerabilities. The testing standards and best practices introduced help ensure that testing focuses on detecting potential security vulnerabilities rather than concentrating only on the correct operation of software functions and features. The learner is expected to know common software vulnerabilities and countermeasures, and the security testing tools that maximize detection of the errors that lead to those vulnerabilities. The course will explore security issues encountered in exception management, configuration management, the build environment, and interface coding, and will familiarize learners with code reviews.
A. Declarative versus Imperative (Programmatic) Security
B. Vulnerability Databases/Lists (e.g., OWASP Top 10, CWE)
C. Defensive Coding Practices and Controls
 • Concurrency
 • Configuration
 • Configuration Parameter Management (e.g., start-up variables, cryptographic agility)
 • Cryptography
 • Error Handling
 • Exception management
 • Input Validation
 • Logging & Auditing
 • Memory Management (e.g., locality, garbage collection)
 • Output Sanitization (e.g., Encoding)
 • Safe APIs
 • Sandboxing
 • Session Management
 • Tokenizing
 • Type Safety
D. Source Code and Versioning
E. Development and Build environment (e.g., build tools, automatic build script)
F. Code/Peer Review
G. Code Analysis (e.g., static, dynamic)
H. Anti-tampering Techniques (e.g., code signing, obfuscation)
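The input-validation and output-sanitization items in the defensive coding list above can be sketched in a few lines of Python (the field name and validation rule are hypothetical, chosen only for illustration): validate inputs against an allow-list on the way in, and encode output for its target context on the way out.

```python
import html
import re

# Allow-list validation: accept only what is known to be good
# (the username rule here is illustrative).
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value):
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_comment(comment):
    # Output encoding for an HTML context: metacharacters become entities,
    # so stored input cannot execute as script (mitigates XSS).
    return "<p>%s</p>" % html.escape(comment)

print(validate_username("alice_01"))
print(render_comment("<script>alert(1)</script>"))
# → <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The two controls are complementary: validation rejects malformed data early, while encoding neutralizes whatever data does flow to an interpreter.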

IV. Secure Software Testing
This section covers the beta testing phase in the secure software development lifecycle. When software is functionally complete, thorough testing must be performed to determine whether the final software meets its security requirements. Secure code designers should be familiar with software quality assurance standards and should know the impact assessment and corrective action procedures to follow when security issues are discovered.
The learner is expected to understand the concepts of functional and security testing, interoperability testing, bug tracking, and the testing of high-priority code (code that is part of the "attack surface"). Content includes penetration testing, fuzzing, scanning, simulation testing, testing for failure, and cryptographic validation, as well as the methods and processes used to ensure that bug fixes do not regress the baseline releases of the code.
A. Testing Artifacts (e.g., strategies, plans, cases)
B. Testing for Security and Quality Assurance
 • Functional Testing (e.g., logic)
 • Nonfunctional Testing (e.g., reliability, performance, scalability)
 • Security Testing (e.g., white box and black box)
 • Environment (e.g., interoperability, test harness)
 • Bug tracking (e.g., defects, errors and vulnerabilities)
 • Attack surface validation
 • Standards (e.g., ISO, OSSTMM, SEI)
C. Types of Testing
 • Penetration
 • Fuzzing (e.g., generated, mutated)
 • Scanning (e.g., vulnerability, content, privacy)
 • Simulation (e.g., environment and data)
 • Failure (e.g., fault injection, stress testing, break testing)
 • Cryptographic validation (e.g., PRNG)
 • Regression
 • Continuous (e.g., synthetic transactions)
D. Impact Assessment and Corrective Action
E. Test Data Lifecycle Management (e.g., privacy, dummy data, referential integrity)

V. Software Acceptance
The learner determines whether or not the software is ready to deliver to customers from a security viewpoint. Acceptance provides an overall picture of the security posture of the software and the likelihood that it will be able to withstand attack after release. The learner is expected to understand the various methods for determining completion criteria, risk acceptance and documentation (e.g., Disaster Recovery and Business Continuity Plans), Common Criteria, and methods of independent testing.
A. Pre-release and pre-deployment
 • Completion Criteria (e.g., documentation, DRP, BCP)
 • Risk Acceptance (e.g., exception policy, sign-off)
B. Post-release
 • Validation and Verification (e.g., FIPS, common criteria)
 • Independent Testing (e.g., third party)

VI. Software Deployment, Operations, Maintenance and Disposal
The learner will be able to use the information provided in vulnerability reports to detect and eliminate further vulnerabilities after they are discovered in software that has already shipped. This section covers how to address new attacks that were unknown at the time the software was shipped. The learner will be able to evaluate vulnerability reports, release security advisories and updates when appropriate, and conduct a post-mortem of reported vulnerabilities and take action as necessary. The learner will also be familiar with the procedures and security measures that must be taken when a product reaches its end of life.
A. Installation and Deployment
 • Bootstrapping (e.g., key generation, access, management)
 • Configuration Management (e.g., elevated privileges, hardening, platform change)
 • Release Management (e.g., version control)
B. Operations and Maintenance
 • Monitoring (e.g., metrics, audits, SLA)
 • Incident Management
 • Problem Management (e.g., root cause analysis, vulnerability tracking, user support)
 • Change Management (e.g., patching)
 • Backup, Recovery and Archiving (e.g., retention cycles)
C. Software Disposal (e.g., retirement, end of life policies, decommissioning)

VII. Supply Chain and Software Acquisition
This section provides a holistic outline of the knowledge and tasks required to manage risk for the outsourced development, acquisition, and procurement of software and related services (e.g., cloud computing, mobile application development). This domain defines what the expectations of an organization should be when acquiring software, to assure that third-party products will not act maliciously, whether intentionally or not, and will not disrupt the organization's business or result in negative financial impact.
The learner applies their accumulated knowledge of the Secure Software Development Life Cycle (SDLC) to evaluate suppliers and communicate with them on security issues, including vulnerability management, service level agreement monitoring, and chain of custody throughout the source code development and maintenance life cycle. The successful learner will understand the legalities surrounding the use and reuse of open source libraries and the security vulnerabilities that may or may not exist in the code.
A. Supplier Risk Assessment (e.g., managing the enterprise risk of outsourcing)
 • Risk Assessment for Code Reuse
 • Intellectual Property (e.g., Open Source License, Closed Source License, Third Party Proprietary)
 • Legal Compliance
 • Supplier Pre-Qualification (e.g., assessment of software engineering/SDLC approaches, information systems security policy compliance)
B. Supplier Sourcing
 • Contractual integrity controls (e.g., audit of security policy compliance, vulnerability/incident response)
 • Vendor technical integrity controls for third-party suppliers (e.g. secure transfer, system sharing/interconnections, secure storage, code exchange)
 • Managed Services (e.g., cloud, outsourcing)
 • Service-Level Agreements (SLAs) (e.g., monitoring plans, KPIs, performance metrics, targets)
C. Software Development and Test
 • Technical Controls (e.g., code repository security, build environment security)
 • Code Testing and Verification (e.g., backdoor detection, embedded malware detection)
 • Security Testing Controls (e.g., peer review, secure code review)
 • Software Requirements Verification and Validation
D. Software Delivery, Operations and Maintenance
 • Chain of Custody (e.g., each change and transfer made during the source code's lifetime is authorized, transparent and verifiable)
 • Publishing and dissemination controls (e.g., code signing, delivery, transfer, tamper resistance)
 • Systems-of-Systems integration (e.g., security testing and analysis)
 • Software Authenticity and Integrity (e.g., cryptographically hashed, digitally signed components, software integrity is verified at run-time)
 • Product deployment and sustainment controls (e.g., upgrades, secure configuration, custom code extension, operational readiness)
 • Monitoring and Incident Management (e.g., supplier, components, SLAs, IDS/IPS)
 • Vulnerability Management, Tracking and Resolution (e.g., patching)
E. Supplier Transitioning (e.g., code escrow, data exports, contracts, disclosure)
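The software authenticity and integrity controls listed above can be sketched with the Python standard library (the artifact and its published digest are invented for illustration): compute a SHA-256 digest of a received component and compare it, in constant time, against the published value. A real supply-chain control would additionally verify a digital signature over the digest, so that the published value itself cannot be tampered with.

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    # Cryptographic hash of the component's bytes.
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_hex: str) -> bool:
    # hmac.compare_digest avoids leaking where the comparison diverges.
    return hmac.compare_digest(sha256_hex(data), expected_hex)

artifact = b"release-1.2.3 contents"  # hypothetical delivered component
published = sha256_hex(artifact)      # would come from a signed manifest

print(verify_integrity(artifact, published))                # True
print(verify_integrity(artifact + b"tampered", published))  # False
```

Performing this check at delivery, at installation, and again at run-time is what the outline's "software integrity is verified at run-time" item refers to.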



 
