What is the one item that could be labeled as the “most wanted” item in coding security?

An IDS will only alert on suspicious traffic; it does not block it.

Do I Need IDS If I Have IPS?

An IDS and an IPS are different things. The technology an IDS uses to detect security problems is very similar to the technology an IPS uses to prevent them.

Do IDS And IPS Work Together?

IDS and IPS together provide a network security solution. An IDS needs help from other networking devices to respond to an attack, whereas an IPS sits in the data stream and works to protect against malicious attacks directly.

What Does An IPS Do?

An intrusion prevention system is network security technology that works to detect and prevent identified threats. Intrusion prevention systems continuously monitor your network, looking for possible malicious incidents.

What Is An Advantage Of A Network Based IDS?

A NIDS identifies security threats. NIDS are passive devices that listen on a network, so deploying them does not affect network performance.


Why Is IPS Better Than IDS?

The main difference between them is that an IDS is a monitoring system: it does not alter network packets in any way, whereas an IPS can prevent a packet from being delivered based on its contents.

Is IPS Monitor Necessary?

Note that "IPS" here refers to In-Plane Switching display panels, not intrusion prevention systems. IPS panels cost more but offer better display quality and wider viewing angles.

Why IDS Is Mentioned In The Method Of IPS?

An IPS is an extension of an IDS: it monitors network traffic for malicious activity and protects the network from intrusion by dropping the offending packet, denying it entry, or blocking the connection.

Is Snort An IDS Or IPS?

Snort provides real-time network traffic analysis and data packet logging. It can be deployed as an IDS or, when run inline, as an IPS.

Can IPS Detect Malware?

IPS security can detect the same kinds of malicious activity and policy violations that an IDS does, and can additionally respond in real time to stop immediate threats.

What Are IDS And IPS?

The network infrastructure includes intrusion detection systems and intrusion prevention systems.

What Are The Types Of IDS?

There are different ways in which intrusion detection systems are classified. The major classifications are active and passive IDS, network intrusion detection systems (NIDS), and host intrusion detection systems (HIDS).

What Are Signature Based IDS?

A signature-based IDS matches traffic against a database of known attack patterns. Its advantages include:

It can determine the type of attack.

It produces few false positives.

It is easy for a normal user to monitor.
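As a rough illustration (not from the source), signature-based detection boils down to matching traffic against a list of known patterns; the signature IDs and payloads below are invented for the sketch:

```python
# Minimal sketch of signature-based detection (illustrative only).
# The signature patterns below are hypothetical, not real IDS rules.

SIGNATURES = {
    "sig-001": b"/etc/passwd",       # path-traversal attempt
    "sig-002": b"' OR '1'='1",       # classic SQL injection probe
    "sig-003": b"\x90\x90\x90\x90",  # NOP sled often seen in overflows
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the IDs of all signatures found in the payload."""
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET /download?file=../../etc/passwd HTTP/1.1")
```

Because each hit names a specific known pattern, a signature engine can identify the attack type and produces few false positives, at the cost of missing attacks it has no signature for.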

What Does An IDS Do?

An intrusion detection system is a device or software application that monitors a network for malicious activity.

Presentation on theme: "Secure Software Development" — Presentation transcript:

1 Secure Software Development
Chapter 18

2 Objectives Describe how secure coding can be incorporated into the software development process. List the major types of coding errors and their root causes. Describe good software development practices and explain how they impact application security. Describe how using a software development process enforces security inclusion in a project. Learn about application-hardening techniques.

3 Key Terms (1 of 2) Agile model Black-box testing Buffer overflow
Canonicalization error Code injection Common Vulnerabilities and Exposures (CVE) Common Weakness Enumeration (CWE) Cryptographically random CWE/SANS Top 25 Most Dangerous Software Errors Dead code Deprecated function DevOps Evolutionary model Agile model – A software development model built around the idea of many small iterations that continually yield a “finished” product at the completion of each iteration. Black-box testing – A form of testing where the tester has no knowledge of the inner workings of a mechanism. Buffer overflow – A specific type of software coding error that enables user input to overflow the allocated storage area and corrupt a running program. Canonicalization error – An error arising from the fact that inputs to a web application may be processed by multiple applications, such as the web server, application server, and database server, each with its own parser to resolve appropriate canonicalization issues. Code injection – An attack where unauthorized executable code is injected via an interface in an attempt to get it to run on a system. Common Vulnerabilities and Exposures (CVE) – A structured language (XML) schema used to describe known vulnerabilities in software. Common Weakness Enumeration (CWE) – A structured language (XML) schema used to describe known weakness patterns in software that can result in vulnerabilities. Cryptographically random – A random number that is derived from a nondeterministic source, so knowing one random number provides no insight into the next. CWE/SANS Top 25 Most Dangerous Software Errors – This list, maintained by SANS and MITRE, includes the 25 most dangerous programming errors categorized in three distinct areas. Dead code – Code that, while it may be executed, obtains results that are never used elsewhere in the program. Deprecated function – A function that has been superseded and/or is no longer fit for use. 
DevOps – A combination of development and operations, in other words, a blending of tasks performed by a company’s application development and systems operations teams. Evolutionary model – An iterative model designed to enable the construction of increasingly complex versions of a project.

4 Key Terms (2 of 2) Fuzzing Grey-box testing Immutable system
Least privilege Requirements phase Secure development lifecycle (SDL) model Spiral model SQL injection Testing phase Use case Waterfall model White-box testing Zero-day Fuzzing – The use of large quantities of data to test an interface against security vulnerabilities. (Also known as fuzz testing.) Grey-box testing – A form of testing where the tester has limited or partial knowledge of the inner workings of a system. Immutable system – A system that, once deployed, is never modified, patched, or upgraded. If a patch or update is required, the system is merely replaced with a new, updated one. Least privilege – A security principle in which a user is provided with the minimum set of rights and privileges that he or she needs to perform required functions. The goal is to limit the potential damage that any user can cause. Requirements phase – This phase should define the specific security requirements if there is any expectation of them being designed into the project. Secure development lifecycle (SDL) model – A process model that includes security function consideration as part of the build process of software in an effort to reduce attack surfaces and vulnerabilities. Spiral model – A design model that has steps in phases that execute in a spiral fashion, repeating at different levels with each revolution of the model. SQL injection – An attack against a SQL engine parser designed to perform unauthorized database activities. Testing phase – The final phase in the process, where testing is done before the product is given to end users. Use case – A review process used to understand the requirements of a piece of software. Waterfall model – This model is characterized by a multistep process in which steps follow each other in a linear, one-way fashion, like water over a waterfall. White-box testing – A testing methodology where the test team has access to the design and coding elements. 
Zero-day – A name given to a vulnerability whose existence is known, but not to the developer of the software, hence it can be exploited before patches are developed and released.

5 The Software Engineering Process
There are several major categories of software engineering processes. The waterfall model, the spiral model, and the evolutionary model are major examples. Integrating security in the software development lifecycle process requires: Inclusion of security requirements and measures in the specific process model being used Use of secure coding methods to prevent opportunities to introduce security failures into the software’s design Software does not build itself. This is good news for software designers, analysts, programmers, and the like, for the complexity of designing and building software enables them to engage in well-paying careers. To achieve continued success in this difficult work environment, software engineering processes have been developed. Rather than just sitting down and starting to write code at the onset of a project, software engineers use a complete development process. There are several major categories of software engineering processes. The waterfall model, the spiral model, and the evolutionary model are major examples. Within each of these major categories, there are numerous variations, and each group then personalizes the process to their project requirements and team capabilities. Traditionally, security is an add-on item that is incorporated into a system after the functional requirements have been met. It is not an integral part of the software development lifecycle process. This places it at odds with both functional and lifecycle process requirements. The resolution to all of these issues is relatively simple: incorporate security into the process model and build it into the product along with each functional requirement. The challenge is in how to accomplish this goal. There are two separate and required elements needed to achieve this objective. First, the inclusion of security requirements and measures in the specific process model being used. 
Second, the use of secure coding methods to prevent opportunities to introduce security failures into the software’s design.

6 Process Models (1 of 2) The waterfall model is characterized by a multistep process in which steps follow each other in a linear, one-way fashion, like water over a waterfall. The spiral model has steps in phases that execute in a spiral fashion, repeating at different levels with each revolution of the model. The agile model is characterized by iterative development, where requirements and solutions evolve through an ongoing collaboration between self-organizing cross-functional teams.

7 Process Models (2 of 2) The evolutionary model is an iterative model designed to enable the construction of increasingly complex versions of a project. From a secure coding perspective, a secure development lifecycle (SDL) model is essential to success. Four primary items of interest in software creation are: Requirements, design, coding, and testing phases

8 Secure Development Lifecycle (1 of 2)
Secure coding is creating code that does what it is supposed to do, and only what it is supposed to do. Firms are now recognizing the need to include secure coding principles in the development process. Microsoft has its Security Development Lifecycle (SDL). The Software Assurance Forum for Excellence in Code (SAFECode) is an organization formed by some of the leading software development firms with the objective of advancing software assurance through better development methods. There may be as many different software engineering methods as there are software engineering groups. But an analysis of these methods indicates that most share common elements from which an understanding of a universal methodology can be obtained. For decades, secure coding—that is, creating code that does what it is supposed to do, and only what it is supposed to do—has not been high on the radar for most organizations. The past decade of explosive connectivity and the rise of malware and hackers have raised awareness of this issue significantly. A recent alliance of several major software firms concerned with secure coding principles revealed several interesting patterns. First, they were all attacking the problem using different methodologies, yet in surprisingly similar fashions. Second, they found a series of principles that appear to be related to success in this endeavor. First and foremost, recognition of the need to include secure coding principles in the development process is a common element among all firms. Microsoft has been very open and vocal about its implementation of its Security Development Lifecycle (SDL) and has published significant volumes of information surrounding its genesis and evolution. The Software Assurance Forum for Excellence in Code (SAFECode) is an organization formed by some of the leading software development firms with the objective of advancing software assurance through better development methods. 
SAFECode members include EMC, Microsoft, and Intel. An examination of SAFECode members’ processes reveals an assertion that secure coding must be treated as an issue that exists throughout the development process and cannot be effectively treated at a few checkpoints with checklists. Regardless of the software development process used, the first step down the path to secure coding is to infuse the process with secure coding principles.

9 Secure Development Lifecycle (2 of 2)
Two important tools have come from the secure coding revolution: Attack surface area minimization is a strategy to reduce the places where code can be attacked. Threat modeling is the process of analyzing threats and their potential effects on software in a very finely detailed fashion. The output of the threat model process is a compilation of threats and how they interact with the software. Threat Modeling and Attack Surface Area Minimization Two important tools have come from the secure coding revolution: threat modeling and attack surface area minimization. Attack surface area minimization is a strategy to reduce the places where code can be attacked. The second major design effort is one built around threat modeling, the process of analyzing threats and their potential effects on software in a very finely detailed fashion. The output of the threat model process is a compilation of threats and how they interact with the software. This information is communicated across the design and coding team, so that potential weaknesses can be mitigated before the software is released.
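The compiled output of a threat model can be sketched as a simple list of threats mapped to software elements. As an illustrative assumption (the slide prescribes no particular format), the entries below use the common STRIDE category names:

```python
from dataclasses import dataclass

# Illustrative sketch: a threat-model entry records a threat, the component
# of the attack surface it touches, and the planned mitigation. Category
# names follow the common STRIDE convention (an assumption, not from the
# slide), and the sample entries are invented.

@dataclass
class Threat:
    component: str    # part of the attack surface the threat targets
    category: str     # e.g., "Spoofing", "Tampering", "Information Disclosure"
    description: str
    mitigation: str   # empty string means not yet mitigated

model = [
    Threat("login form", "Spoofing", "credential stuffing", "rate limiting + MFA"),
    Threat("file upload", "Tampering", "malicious file contents", "validate type and size"),
]

def unmitigated(model: list[Threat]) -> list[Threat]:
    """Threats still lacking a mitigation must be resolved before release."""
    return [t for t in model if not t.mitigation]
```

Communicating a structure like this across the design and coding team is what lets potential weaknesses be mitigated before release.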

10 Requirements Phase (1 of 2)
The requirements phase should define the specific security requirements if there is any expectation of them being designed into the project. The process is all about completing the requirements. The objective of the secure coding process is to properly implement this and all other requirements, so that the resultant software performs as desired and only as desired. Requirements process is a key component of security in software development. Requirements Phase The requirements phase should define the specific security requirements if there is any expectation of them being designed into the project. Regardless of the methodology employed, the process is all about completing the requirements. Secure coding does not refer to adding security functionality into a piece of software. Security functionality is a standalone requirement. The objective of the secure coding process is to properly implement this and all other requirements, so that the resultant software performs as desired and only as desired. The requirements process is a key component of security in software development. Security-related items enumerated during the requirements process are visible throughout the rest of the software development process. They can be architected into the systems and subsystems, addressed during coding, and tested. For the subsequent steps to be effective, the security requirements need to be both specific and positive. Requirements such as “make secure code” or “no insecure code” are nonspecific and not helpful in the overall process. Specific requirements such as “prevent unhandled buffer overflows and unhandled input exceptions” can be specifically coded for in each piece of code.
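A specific requirement such as “prevent unhandled input exceptions” translates directly into code that can later be tested. This is a hedged sketch of one way such a requirement might be implemented; the field name and limits are invented for illustration:

```python
# Sketch: implementing the specific requirement "no unhandled input
# exceptions" for a single field. The field (age), length cap, and range
# are illustrative assumptions, not from the source.

MAX_LEN = 64

def parse_age(raw: str) -> int:
    """Validate untrusted input instead of letting exceptions escape."""
    if len(raw) > MAX_LEN:
        raise ValueError("input too long")
    if not raw.strip().isdigit():
        raise ValueError("age must be a non-negative integer")
    age = int(raw)
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age
```

Because the requirement is specific, a tester can later write use cases (valid values, junk strings, oversized input) that verify exactly this behavior.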

11 Requirements Phase (2 of 2)
The cost of adding security at a later time rises exponentially. The development of both functional and nonfunctional security requirements occurs in tandem with other requirements through the: Development of use cases, analysis of customer inputs, implementation of company policies, and compliance with industry best practices One output of the requirements phase is a security document. During the requirements activity, it is essential that the project/program manager and any business leaders who set schedules and allocate resources are aware of the need and requirements of the secure development process. The cost of adding security at a later time rises exponentially, with the most expensive form being the common release-and-patch process used by many firms. The development of both functional and nonfunctional security requirements occurs in tandem with other requirements through the development of use cases, analysis of customer inputs, implementation of company policies, and compliance with industry best practices. Depending on the nature of a particular module, special attention may be focused on sensitive issues such as personally identifiable information (PII), sensitive data, or intellectual property data. One of the outputs of the requirements phase is a security document that helps guide the remaining aspects of the development process, ensuring that secure code requirements are being addressed. These requirements can be infused into design, coding, and testing, ensuring they are addressed throughout the development process.

12 Design Phase Designing a software project is a multifaceted process.
Minimizing attack surface area is a concept that tends to run counter to the way software has been designed—most designs come as a result of incremental accumulation, adding features and functions without regard to maintainability. Design Phase Coding without designing first is like building a house without using plans. This might work fine on small projects, but as the scope grows, so do complexity and the opportunity for failure. Designing a software project is a multifaceted process. Just as there are many ways to build a house, there are many ways to build a program. Design is a process involving trade-offs and choices, and the criteria used during the design decisions can have lasting impacts on program construction. There are two secure coding principles that can be applied during the design phase that can have a large influence on the code quality. The first of these is the concept of minimizing attack surface area. Reducing the avenues of attack available to a hacker can have obvious benefits. Minimizing attack surface area is a concept that tends to run counter to the way software has been designed—most designs come as a result of incremental accumulation, adding features and functions without regard to maintainability.

13 Coding Phase (1 of 4) The point at which the design is implemented is the coding step in the software development process. The act of instantiating an idea into code is a point where an error can enter the process. Examples include: The failure to include desired functionality The inclusion of undesired behavior in the code Testing for the first type of error is relatively easy if the requirements are enumerated in a previous phase of the process.

14 Coding Phase (2 of 4) Testing for the inclusion of undesired behavior is significantly more difficult. Enumerations of known software weaknesses and vulnerabilities have been compiled and published as the Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE) by the MITRE Corporation. These enumerations have enabled significant advancement in the development of methods to reduce code vulnerabilities. Coding Phase (continued) Testing for the inclusion of undesired behavior is significantly more difficult. Testing for an unknown is a virtually impossible task. What makes this possible at all is the concept of testing for categories of previously determined errors. Several classes of common errors have been observed. Enumerations of known software weaknesses and vulnerabilities have been compiled and published as the CWE and CVE by the MITRE Corporation, a government-funded research group. The CVE and CWE are vendor- and language-neutral methods of describing errors. These enumerations allow a common vocabulary for communication about weaknesses and vulnerabilities. This common vocabulary has also led to the development of automated tools to manage the tracking of these issues.

15 Coding Phase (3 of 4) Primary and most damaging coding errors are least privilege violations and cryptographic failures. Language-specific failures are another common source of vulnerabilities. Ways to go about searching for coding errors that lead to vulnerabilities in software include: Manual code inspection Static code-analysis tools designed to analyze code for potential defects Coding Phase (continued) There are many common coding errors, but some of the primary and most damaging are least privilege violations and cryptographic failures. Language-specific failures are another common source of vulnerabilities. There are several ways to go about searching for coding errors that lead to vulnerabilities in software. One method is manual code inspection. Developers can be trained to “not make mistakes,” but this approach has not proven successful. This has led to the development of a class of tools designed to analyze code for potential defects. Static code-analysis tools can be used to analyze software for coding errors that can lead to known types of vulnerabilities and weaknesses. Sophisticated static code analyzers can examine codebases to find function calls of unsafe libraries, potential buffer-overflow conditions, and numerous other conditions.

16 Coding Phase (4 of 4) Currently, the CWE describes more than 750 different weaknesses. MITRE collaborated with SANS to develop the CWE/SANS Top 25 Most Dangerous Software Errors list. One of the ideas behind the Top 25 list is that it can be updated periodically as the threat landscape changes. Two main enumerations of common software errors are: Top 25 list maintained by MITRE OWASP Top Ten list for web applications Coding Phase (continued) Currently, the CWE describes more than 750 different weaknesses, far too many for developer memory and direct knowledge. In light of this, and due to the fact that some weaknesses are more prevalent than others, MITRE has collaborated with SANS to develop the CWE/SANS Top 25 Most Dangerous Software Errors list. One of the ideas behind the Top 25 list is that it can be updated periodically as the threat landscape changes. Explore the current listing online. There are two main enumerations of common software errors: the Top 25 list maintained by MITRE and the OWASP Top Ten list for web applications. Depending on the type of application being evaluated, these lists provide a solid starting point for security analysis of known error types. MITRE is the repository of the industry standard list for standard programs, and OWASP is for web applications. As the causes of common errors do not change quickly, these lists are not updated every year.
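The common vocabulary these enumerations provide rests on stable identifier formats; for example, CVE entries use the pattern CVE-YYYY-NNNN, where the sequence number is four or more digits. A small sketch of validating that format:

```python
import re

# CVE identifiers follow the pattern CVE-YYYY-NNNN..., where the year is
# four digits and the sequence number is four or more digits.
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(s: str) -> bool:
    """Check whether a string is a syntactically valid CVE identifier."""
    return bool(CVE_RE.match(s))
```

A stable, machine-checkable format like this is what makes the automated tracking tools mentioned above possible.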

17 Least Privilege (1 of 2) Least privilege requires that the developer understand what privileges are needed specifically for an application to execute and access all its necessary resources. The key principle is to plan and understand the nature of the software’s interaction with the operating system and system resources. Determine what needs to be accessed and what the appropriate level of permission is, then use that level in design and implementation. One of the central paradigms of security is the notion of running a process with the least required privilege. Least privilege requires that the developer understand what privileges are needed specifically for an application to execute and access all its necessary resources. Obviously, from a developer point of view, it would be easier to use administrative-level permission for all tasks, which removes access controls from the equation, but this also removes the very protections that access-level controls are designed to provide. The other end of the spectrum is software designed for operating systems without any built-in security, such as early versions of Windows and some mainframe OSs, where security comes in the form of an application package. When migrating these applications to platforms with built-in security, the issue of access controls arises.

18 Least Privilege (2 of 2) The cost of failure to heed the principle of least privilege can be twofold. First, you have expensive, time-consuming access-violation errors that are hard to track down and correct. The second problem is when an exploit is found that allows some other program to use portions of your code in an unauthorized fashion.
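On POSIX systems, one common expression of this principle is for a process started with elevated rights to drop to an unprivileged identity once privileged setup is complete. A hedged sketch (the uid/gid values are placeholders for a dedicated service account, and the ordering matters):

```python
import os

# Sketch of the least-privilege pattern on POSIX: do privileged setup
# first, then permanently drop to an unprivileged identity. The uid/gid
# values below are placeholders, not recommendations.

def drop_privileges(uid: int = 65534, gid: int = 65534) -> bool:
    """Drop root privileges if we have them; return True if dropped."""
    if os.getuid() != 0:
        return False          # already unprivileged; nothing to drop
    os.setgid(gid)            # drop group first, while we still can
    os.setuid(uid)            # then drop user; cannot be regained afterward
    return True
```

Designing with a call like this in mind forces the exact planning the slide describes: knowing which resources need privileged access, doing that work early, and running everything else with minimal rights.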

19 Cryptographic Failures (1 of 2)
Cryptographic failures come from several common causes. One typical mistake is choosing to develop your own cryptographic algorithm. Deciding to use a trusted algorithm is a proper start, but there still are errors that can occur in instantiating the algorithm. Generation of a real random number is not a trivial task as generating a pure, non-reproducible random number is a challenge. Hailed as a solution for all problems, cryptography has as much chance of being the ultimate cure-all as did the tonics sold by traveling salesmen of a different era. There is no such thing as a universal solution, yet there are some very versatile tools that provide a wide range of protections. Cryptography falls into this “very useful tool” category. Proper use of cryptography can provide a wealth of programmatic functionality, from authentication and confidentiality to integrity and nonrepudiation. These are valuable tools, and many programs rely on proper cryptographic function for important functionality. The need for this functionality in an application tempts programmers to roll their own cryptographic functions. This is a task fraught with opportunity for catastrophic error. Cryptographic errors come from several common causes. One typical mistake is choosing to develop your own cryptographic algorithm. Development of a secure cryptographic algorithm is far from an easy task, and even when done by experts, weaknesses can occur that make them unusable. Cryptographic algorithms become trusted after years of scrutiny and attacks, and any new algorithms would take years to join the trusted set. If you instead decide to rest on secrecy, be warned that secret or proprietary algorithms have never provided the desired level of protection. One of the axioms of cryptography is that there is no security through obscurity. Deciding to use a trusted algorithm is a proper start, but there still are several major errors that can occur. The first is an error in instantiating the algorithm. 
An easy way to avoid this type of error is to use a library function that has already been properly tested. Sources of these library functions abound, and they provide an economical way to obtain this functionality. Once you have an algorithm, and have chosen a particular instantiation, the next item needed is the random number to generate a random key. Cryptographic functions use an algorithm and a key, the latter being a digital number. The generation of a real random number is not a trivial task. Computers are machines that are renowned for reproducing the same output when given the same input, so generating a pure, non-reproducible random number is a challenge. There are functions for producing random numbers built into the libraries of most programming languages, but these are pseudorandom number generators, and although the distribution of output numbers appears random, they generate a reproducible sequence. Given the same input, a second run of the function will produce the same sequence of “random” numbers.
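The reproducibility problem is easy to demonstrate. Python's standard library happens to illustrate both sides: the `random` module is a pseudorandom generator, while the `secrets` module draws from a nondeterministic OS source suitable for key material:

```python
import random
import secrets

# A pseudorandom generator is deterministic: the same seed reproduces
# the exact same "random" sequence -- unacceptable for key generation.
a = random.Random(1234)
b = random.Random(1234)
seq_a = [a.randrange(256) for _ in range(8)]
seq_b = [b.randrange(256) for _ in range(8)]
assert seq_a == seq_b  # identical sequences from the same seed

# A cryptographically random source has no seed to recover; use it
# (or an equivalent vetted library) when generating keys.
key = secrets.token_bytes(16)  # 128 bits of key material
```

The same lesson generalizes to any language: key generation should use a cryptographically random source, never the general-purpose PRNG.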

20 Cryptographic Failures (2 of 2)
Determining the seed and random sequence and using this knowledge to “break” a cryptographic function has been used to bypass the security. This method was used to subvert an early version of Netscape’s SSL implementation. Using a number that is cryptographically random—suitable for an encryption function—resolves the problem. Storing private keys in areas where they can be recovered by an unauthorized person is the next worry. Determining the seed and random sequence and using this knowledge to “break” a cryptographic function has been used more than once to bypass the security. This method was used to subvert an early version of Netscape’s SSL implementation. Using a number that is cryptographically random—suitable for an encryption function—resolves this problem, and again the use of trusted library functions designed and tested for generating such numbers is the proper methodology. Now you have a good algorithm and a good random number—so where can you go wrong? Well, storing private keys in areas where they can be recovered by an unauthorized person is the next worry. Poor key management has failed many a cryptographic implementation.

21 Language-Specific Failures
Modern programming languages are built around libraries that permit reuse and speed the development process. Many library calls and functions were developed without regard to secure coding implications, and this has led to issues related to specific library functions. As mentioned previously, strcpy() has had its fair share of involvement in buffer overflows and should be avoided. Developing and maintaining a list of deprecated functions, prohibiting their use in new code, and removing them from old code when possible is a proven path toward more secure code. Banned functions are easily handled via automated code reviews during the check-in process. The challenge is in making developers aware of the potential dangers and the value of safer coding practices.

22 Testing Phase (1 of 3)
If the requirements phase marks the beginning of the generation of security in code, then the testing phase marks the other boundary. Although there are additional functions after testing, no one wants a user to validate errors in code, and errors discovered after the code has shipped are the most expensive to fix, regardless of severity. Employing use cases to compare program responses to known inputs and then comparing the output to the desired output is a proven method of testing software. Use cases to test specific functional requirements are designed from the requirements determined in the requirements phase, and providing additional security-related use cases is the process-driven way of ensuring that security specifics are also tested. The testing phase is the last opportunity to determine that the software performs properly before the end user experiences problems. Errors found in testing come late in the development process, but at least they are learned about internally, before the end customer suffers. Testing can occur at each level of development: module, subsystem, system, and completed application. The sooner errors are discovered and corrected, the lower the cost and the lesser the impact on project schedules, which makes testing an essential step in developing good programs. Testing for security requires a much broader series of tests than functional testing does.
Misuse cases can be formulated to verify that vulnerabilities cannot be exploited. Fuzz testing (also known as fuzzing) uses random inputs to check for exploitable buffer overflows. Code reviews by design and development teams are used to verify that security elements such as input and output validation are functional, as these are the best defenses against a wide range of attacks, including cross-site scripting and cross-site request forgeries. Code walkthroughs begin with design reviews, architecture examinations, unit testing, subsystem testing, and, ultimately, complete system testing.

23 Testing Phase (2 of 3) Testing includes:
■ White-box testing – the test team has access to the design and coding elements.
■ Black-box testing – the team does not have access to the coding elements.
■ Grey-box testing – the test team has more information than in black-box testing but not as much as in white-box testing.
Final code can be subjected to penetration tests, designed specifically to test configuration, security controls, and common defenses such as input and output validation and error handling.

24 Testing Phase (3 of 3)
One of the most powerful tools that can be used in testing is fuzzing, the systematic application of a series of malformed inputs to test how the program responds. Fuzzing has been used by hackers for years to find potentially exploitable buffer overflows, without any specific knowledge of the coding. A tester can use a fuzzing framework to automate numerous input sequences. In examining whether a function can fall prey to a buffer overflow, numerous inputs can be run, testing lengths and ultimate payload-delivery options. If a particular input string results in a crash that can be exploited, that input is then examined in detail. Fuzzing is relatively new to the development scene but is rapidly maturing and will soon be on nearly equal footing with other automated code-checking tools.

25 Secure Coding Concepts
There are numerous individual elements in the secure development lifecycle (SDL) that can assist a team in developing secure code. Correct SDL processes, such as input validation, proper error and exception handling, and cross-site scripting and cross-site request forgery mitigations, can improve the security of code. Process elements such as security testing, fuzzing, and patch management also help to ensure applications meet a desired risk profile. Application security begins with code that is secure and free of vulnerabilities. Unfortunately, all code has weaknesses and vulnerabilities, so instantiating the code in a manner that has effective defenses preventing the exploitation of vulnerabilities can maintain a desired level of security. Proper handling of configurations, errors and exceptions, and inputs can assist in the creation of a secure application. Testing of the application throughout the system lifecycle can be used to determine the actual security risk profile of a system.

26 Error and Exception Handling
Every application will encounter errors and exceptions, and these need to be handled in a secure manner. One attack methodology is to force errors, moving an application from normal operation into exception handling. During an exception, it is common practice to record/report the condition, including supporting information such as the data that resulted in the error. This information can be invaluable in diagnosing the cause of the error condition. The challenge is in where this information is captured. The best method is to capture it in a log file, where it can be secured by an ACL; the worst is to echo it to the user. Echoing error condition details to users can provide valuable information to attackers when they cause errors on purpose. Improper exception handling can lead to a wide range of disclosures: errors associated with SQL statements can disclose data structures and data elements; remote procedure call (RPC) errors can give up sensitive information such as filenames, paths, and server names; and programmatic errors can give up the line number an exception occurred on, the method that was invoked, and information such as stack elements.

27 Input and Output Validation (1 of 2)
With the move to web-based applications, errors have shifted from buffer overflows to input-handling issues. Users have the ability to manipulate input, so it is up to the developer to handle the input appropriately to prevent malicious entries from having an effect. Buffer overflows could be considered a class of improper input, but newer attacks include canonicalization attacks and arithmetic attacks. Probably the most important defensive mechanism that can be employed is input validation. Considering all inputs to be hostile until properly validated can mitigate many attacks based on common vulnerabilities. This is a challenge, as the validation efforts need to occur after all parsers have completed manipulating input streams, a common function in web-based applications using Unicode and other international character sets. Input validation is especially well suited for the following vulnerabilities: buffer overflow, reliance on untrusted inputs in a security decision, cross-site scripting, cross-site request forgery, path traversal, and incorrect calculation of buffer size. Input validation may seem suitable for various injection attacks, but given the complexity of the input and the ramifications of legal but improper input streams, this method falls short for most injection attacks. What can work is a recognition and whitelisting approach, where the input is validated and then parsed into a standard structure that is then executed. This restricts the attack surface to not only legal inputs but also expected inputs.

28 Input and Output Validation (2 of 2)
In today's computing environment, a wide range of character sets is used. Unicode allows multilanguage support, character codesets allow multilanguage capability, and various encoding schemes, such as hex encoding, are supported to allow diverse inputs. The net result of all these input methods is that there are numerous ways to create the same input to a program. Canonicalization is the process by which application programs manipulate strings to a base form, creating a foundational representation of the input. Canonicalization errors arise from the fact that inputs to a web application may be processed by multiple applications, such as the web server, application server, and database server, each with its own parsers to resolve appropriate canonicalization issues. This becomes a problem when the form of the input string at the time of error checking differs from its canonical form: if the error-checking routine runs prior to resolution to canonical form, then issues may be missed. The string /../, used in directory traversal attacks, can be obscured by encoding and hence missed by a character string match before an application parser manipulates it to canonical form. The first line of defense is to write solid code. Regardless of the language used, or the source of outside input, prudent programming practice is to treat all input from outside a function as hostile. Validate all inputs as if they were hostile and an attempt to force a buffer overflow.
Accept the notion that although during development everyone may be on the same team, conscientious, and compliant with design rules, future maintainers may not be as careful. A second, and equally important, line of defense is proper string handling. String handling is a common event in programs, and string-handling functions are the source of a large number of known buffer-overflow vulnerabilities. Using strncpy() in place of strcpy() is a possible method of improving security because strncpy() requires an input length for the number of characters to be copied. This simple function call replacement can ultimately fail, however, because Unicode and other encoding methods can make character counts meaningless. Resolving this issue requires new library calls, and much closer attention to how input strings, and subsequently output strings, can be abused. Proper use of functions to achieve program objectives is essential to prevent unintended effects such as buffer overflows. Use of the gets() function can probably never be totally safe, since it reads from the stdin stream until a linefeed or carriage return; in most cases, there is no way to predetermine whether the input is going to overflow the buffer. A better solution is to use a C++ stream object or the fgets() function. The function fgets() requires an input buffer length, and hence avoids the overflow. Simply replace

    {
        char buf[512];
        gets( buf );   /* if the input exceeds 512 bytes, overflow occurs */
        /* ... the rest of your code ... */
    }

with

    {
        char buf[512];
        fgets( buf, sizeof(buf), stdin );   /* reads at most 511 chars plus NUL */
        /* ... the rest of your code ... */
    }

29 Normalization (1 of 3)
Normalization is an initial step in the input validation process. It is the process of creating the canonical form, or simplest form, of a string before processing. Strings can be encoded using Unicode and other encoding methods, which makes byte-by-byte comparisons meaningless when trying to screen user input of strings.

30 Normalization (2 of 3)
Developers should always normalize their inputs prior to validation steps to remove Unicode and other encoding issues. Proper string handling is equally important: string handling is a common event in programs, and string-handling functions are the source of a large number of known buffer-overflow vulnerabilities. Proper use of functions to achieve program objectives is essential to prevent unintended effects such as buffer overflows.

31 Normalization (3 of 3)
Output validation is just as important in many cases as input validation. If querying a database for a username and password match, the expected forms of the output of the match function should be either one match or none. If using record count to indicate the level of match, which is a common practice, then a value other than 0 or 1 would be an error. Defensive coding using output validation would not act on values greater than 1, as these are clearly an error and should be treated as a failure.

32 Bug Tracking
Bug tracking is a foundational element in secure development. All bugs are enumerated, classified, and tracked, and if the classification of a bug exceeds a set level, it must be resolved before the code advances to the next level of development. Bugs are classified based on the risk the vulnerability exposes. Microsoft uses four levels:
■ Critical – a security vulnerability having the highest potential for damage
■ Important – a security vulnerability having significant potential for damage, but less than Critical
■ Moderate – a security vulnerability having moderate potential for damage, but less than Important
■ Low – a security vulnerability having low potential for damage
Examples of Critical vulnerabilities include those that, without warning to the user, can result in remote exploit involving elevation of privilege; Critical is reserved for the most important risks. As an example of the distinction between Critical and Important, a vulnerability that would lead to a machine failure requiring reinstallation of software would only score Important. The key difference is that the user would know of this penetration and risk, whereas with a Critical vulnerability, the user may never know that it occurred. The tracking of errors serves several purposes. First, from a management perspective, what is measured is managed, both by management and by those involved; over time, fewer errors will occur if the workforce knows they are being tracked, taken seriously, and represent an issue with the product. Second, since not all errors are immediately correctable, tracking enables future correction when a module is rewritten. Zero defects in code, like zero defects in quality, is not an achievable objective, but this does not mean that constant improvement of the process cannot dramatically reduce error rates.
Evidence from firms involved in SAFECode supports this: they are reaping the benefits of lower error rates and reduced development costs from lower levels of corrective work.

33 Application Attacks
Attacks against a system can occur at the network level, at the operating system level, at the application level, or at the user level (social engineering). Early attack patterns were against the network, but most of today's attacks are aimed at the applications, primarily because that is where the objective of most attacks resides: in the infamous words of bank robber Willie Sutton, "because that's where the money is." In fact, many of today's attacks on systems use combinations of vulnerabilities in networks, operating systems, and applications, all means to an end to obtain the desired objective of an attack, which is usually some form of data. Application-level attacks take advantage of several facts associated with computer applications. First, most applications are large programs written by groups of programmers, and by their nature have errors in design and coding that create vulnerabilities; for a list of typical vulnerabilities, see the Common Vulnerabilities and Exposures (CVE) list maintained by MITRE. Second, even when vulnerabilities are discovered and patched by software vendors, end users are slow to apply patches, as evidenced by the SQL Slammer incident in January 2003. The vulnerability exploited was a buffer overflow, and the vendor supplied a patch six months prior to the outbreak, yet the worm still spread quickly due to the multitude of unpatched systems.

34 Cross-Site Scripting (1 of 3)
Cross-site scripting (XSS) is one of the most common web attack methodologies. A cross-site scripting attack is a code injection attack in which an attacker sends code in response to an input request; this code is then rendered by the web server, resulting in the execution of the code by the web server. Cross-site scripting attacks take advantage of a few common elements in web-based systems. First is the common failure to perform complete input validation: XSS sends script in response to an input request, even when script is not the expected or authorized input type. Second is the nature of web-based systems to dynamically self-create output. Web-based systems are frequently collections of images, text, scripts, and more, which are presented by a web server to a browser that interprets and renders them. XSS attacks can exploit this dynamically self-created output by executing a script in the client browser that receives the altered output.

35 Cross-Site Scripting (2 of 3)
The cause of the vulnerability is weak user input validation: if input is not validated properly, an attacker can include a script in their input and have it rendered as part of the web process. There are several different types of XSS attacks, which are distinguished by the effect of the script:
■ Nonpersistent XSS attack – the injected script is not persisted or stored, but rather is immediately executed and passed back via the web server.
■ Persistent XSS attack – the script is permanently stored on the web server or some back-end storage, which allows the script to be used against others who log into the system.
■ DOM-based XSS attack – the script is executed in the browser via the Document Object Model (DOM) process as opposed to the web server.
Cross-site scripting attacks can result in a wide range of consequences; in some cases, the list can be anything that a clever scripter can devise. Common uses seen in the wild include the following:
■ Theft of authentication information from a web application
■ Session hijacking
■ Deploying hostile content
■ Changing user settings, including future users
■ Impersonating a user
■ Phishing or stealing sensitive information

36 Cross-Site Scripting (3 of 3)
Controls to defend against XSS attacks include the use of anti-XSS libraries to strip scripts from input sequences. Various other mitigations include limiting the types of uploads, screening the size of uploads, and whitelisting inputs, but attempting to remove scripts from inputs can be a tricky task. Well-designed anti-XSS input library functions have proven to be the best defense. Cross-site scripting vulnerabilities are easily tested for and should be a part of the test plan for every application; testing a variety of encoded and unencoded inputs for scripting vulnerability is an essential test element.

37 Injections (1 of 5)
Use of input to a function without validation has already been shown to be risky behavior. Another issue with unvalidated input is the case of code injection: rather than the input being appropriate for the function, the injected code changes the function in an unintended way. A SQL injection attack is a form of code injection aimed at any Structured Query Language (SQL)-based database, regardless of vendor.

38 Injections (2 of 5)
The primary method of defense against this type of vulnerability is to validate all inputs, and rather than validating only length, you need to validate inputs for content. Imagine a web page that asks for user input and then uses that input in building a subsequent page. Now imagine that the user puts the text for a JavaScript function in the middle of their input sequence, along with a call to the script: the generated web page now has an added JavaScript function that is called when displayed. Passing the user input through an HTML-encode function before use can prevent such attacks. Again, good programming practice goes a long way toward preventing these types of vulnerabilities. This places the burden not just on the programmers, but also on the process of training programmers, the software engineering process that reviews code, and the testing process that catches programming errors. This is much more than a single-person responsibility; everyone involved in the software development process needs to be aware of the types and causes of these errors, and safeguards need to be in place to prevent their propagation.

39 Injections (3 of 5) A SQL injection attack is a form of code injection aimed at any Structured Query Language (SQL)–based database, regardless of vendor. An example of this type of attack is where the function takes the user-provided inputs for username and password and substitutes them into a where clause of a SQL statement with the express purpose of changing the where clause into one that gives a false answer to the query.

40 Injections (4 of 5)
LDAP-based systems are also subject to injection attacks. When an application constructs an LDAP request based on user input, a failure to validate the input can lead to bad LDAP requests. Just as SQL injection can be used to execute arbitrary commands in a database, LDAP injection can do the same in a directory system. Something as simple as a wildcard character (*) in a search box can return results that would normally be beyond the scope of a query, so proper input validation is important before passing the request to an LDAP engine.

41 Injections (5 of 5) XML can be tampered with via injection as well.
XML injections can be used to manipulate an XML-based system. As XML is nearly ubiquitous in the web application world, this form of attack has a wide range of targets. The primary defense against injection attacks is input validation: rather than validating only length, you need to validate inputs for content.

42 Directory Traversal/Command Injection
In a directory traversal attack, an attacker uses special inputs to circumvent the directory tree structure of the file system. Adding encoded symbols for "../.." in an unvalidated input box can result in the parser resolving the encoding to the traversal code, bypassing many detection elements, and passing the input to the file system, with the program then executing commands in a different location than designed. When combined with a command injection, the input can result in execution of code in an unauthorized manner. Classified as input validation errors, these can be difficult to detect without doing code walkthroughs and specifically looking for them, which illustrates the usefulness of the Top 25 Most Dangerous Software Errors checklist during code reviews, as it would alert developers to this issue during development. Directory traversals can be masked by using encoding of input streams: if the security check is done before the string is decoded by the system parser, then recognition of the attack form may be impaired. There are many ways to represent a particular input form, the simplest of which is the canonical form (introduced earlier in the "A Rose Is a Rose Is a r%6fse" Tech Tip). Parsers are used to render the canonical form for the OS, but these embedded parsers may act after input validation, making it more difficult to detect certain attacks by just matching a string.

43 Buffer Overflow
If there is one item that could be labeled as the "Most Wanted" in coding security, it would be the buffer overflow. The CERT/CC at Carnegie Mellon University estimates that nearly half of all exploits of computer programs stem historically from some form of buffer overflow. Finding a vaccine for buffer overflows would stamp out half of these security-related incidents by type, and probably 90 percent by volume. The Morris finger worm in 1988 was an exploit of an overflow, as were more recent big-name events such as Code Red and Slammer. The generic classification of buffer overflows includes many variants, such as static buffer overruns, indexing errors, format string bugs, Unicode and ANSI buffer size mismatches, and heap overruns. The concept behind these vulnerabilities is relatively simple: the input buffer used to hold program input is overwritten with data that is larger than the buffer can hold. The root cause of this vulnerability is a mixture of two things: poor programming practice and programming language weaknesses. For example, what would happen if a program that asks for a 7- to 10-character phone number instead receives a string of 150 characters? Many programs will provide some error checking to ensure that this will not cause a problem. Some programs, however, cannot handle this error, and the extra characters continue to fill memory, overwriting other portions of the program.
This can result in a number of problems, including causing the program to abort or the system to crash. Under certain circumstances, the program can execute a command supplied by the attacker. Buffer overflows typically inherit the level of privilege enjoyed by the program being exploited. This is why programs that run with root-level access are so dangerous when exploited with a buffer overflow, as the injected code executes at root level. Programming languages such as C were designed for space and performance constraints. Many functions in C, like gets(), are unsafe in that they permit unsafe operations, such as unbounded string manipulation into fixed buffer locations. The C language also permits direct memory access via pointers, a functionality that provides a lot of programming power but carries with it the burden of proper safeguards being provided by the programmer. Buffer overflows are input validation attacks, designed to take advantage of input routines that do not validate the length of inputs. The vulnerability is surprisingly simple to resolve: all that is required is validation of all input lengths prior to writing to memory. This can be done in a variety of ways, including the use of safe library functions for inputs. This is one of the vulnerabilities that has been shown to be solvable, and in fact its prevalence is declining substantially among major security-conscious software firms.
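The length check that unsafe C functions such as gets() omit can be sketched as follows, using the phone-number example above; the buffer size and function name are illustrative:

```python
BUF_SIZE = 10  # a 7- to 10-character phone-number field

def read_phone_number(raw: str) -> str:
    """Validate length before storing into the fixed-size field --
    the check that unsafe C functions such as gets() never perform."""
    if not (7 <= len(raw) <= BUF_SIZE):
        raise ValueError(f"input length {len(raw)} outside 7..{BUF_SIZE}")
    return raw

print(read_phone_number("5551234567"))  # accepted
try:
    read_phone_number("A" * 150)        # the 150-character attack input
except ValueError as e:
    print("rejected:", e)
```

In C, the same idea means using bounded functions (fgets, snprintf, strncpy with explicit sizes) rather than their unbounded counterparts.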

44 Integer Overflow An integer overflow is a programming error condition that occurs when a program attempts to store a numeric value, an integer, in a variable that is too small to hold it. The results vary by language and numeric type. In some cases, the value saturates the variable, assuming the maximum value for the defined type and no more. In other cases, especially with signed integers, it can roll over into a negative value, as the most significant bit is usually reserved for the sign of the number. This can create significant logic errors in a program. Integer overflows are easily tested for, and static code analyzers can point out where they are likely to occur. Given this, there are no good excuses for having these errors end up in production code.
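The rollover behavior is easy to demonstrate with fixed-width integer types, as in this sketch using Python's ctypes; the saturating helper is an illustrative alternative:

```python
import ctypes

# A signed 8-bit integer holds -128..127.  Storing a larger value
# rolls over: the high bit becomes the sign bit.
print(ctypes.c_int8(127).value)   # 127  -- fits
print(ctypes.c_int8(128).value)   # -128 -- rollover into negative
print(ctypes.c_int8(200).value)   # -56

# A saturating alternative clamps at the type maximum instead.
def saturate_int8(n: int) -> int:
    return max(-128, min(127, n))

print(saturate_int8(200))         # 127
```

A balance or quantity field that silently becomes negative after rollover is exactly the kind of logic error the text describes.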

45 Cross-Site Request Forgery
Cross-site request forgery (XSRF) attacks utilize unintended behaviors that are proper in defined use but are performed under circumstances outside the authorized use. This is an example of a "confused deputy" problem, a class of problems where one entity mistakenly performs an action on behalf of another. An XSRF attack relies upon several conditions to be effective. It is performed against sites that have an authenticated user and exploits the site's trust in a previous authentication event. Then, by tricking a user's browser into sending an HTTP request to the target site, the trust is exploited. Assume your bank allows you to log in and perform financial transactions but does not validate the authentication for each subsequent transaction. If a user is logged in and has not closed their browser, then an action in another browser tab could send a hidden request to the bank, resulting in a transaction that appears to be authorized but in fact was not made by the user. There are many different mitigation techniques that can be employed, from limiting authentication times, to cookie expiration, to managing specific elements of a web page such as header checking. The strongest method is the use of random XSRF tokens in form submissions; subsequent forged requests cannot work, as the attacker cannot know the token set in advance. Testing for XSRF takes a bit more planning than for other injection-type attacks, but this, too, can be accomplished as part of the design process.
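The random-token mitigation can be sketched as follows; the session dictionary stands in for a real server-side session store, and the function names are illustrative:

```python
import hmac
import secrets

# Server side: issue a random token with the form, store it in the session.
session = {"xsrf_token": secrets.token_urlsafe(32)}  # simulated session store

def is_request_authorized(session: dict, submitted_token: str) -> bool:
    """Reject any form submission whose token does not match the one
    issued with the form; a forging site cannot know that value."""
    expected = session.get("xsrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted_token)

legit = session["xsrf_token"]
print(is_request_authorized(session, legit))      # True  -- genuine form post
print(is_request_authorized(session, "guessed"))  # False -- forged request
```

The forged request carries the victim's cookies automatically, but not the token embedded in the legitimate form, which is why the check works.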

46 Zero-Day Zero-day is a term used to define vulnerabilities that are newly discovered and not yet addressed by a patch. Most vulnerabilities exist in an unknown state until discovered by a researcher or the developer. If a researcher or developer discovers a vulnerability but does not share the information, then this vulnerability can be exploited without a vendor's ability to fix it, because for all practical purposes the issue is unknown, except to the person who found it. From the time of discovery until a fix or patch is made available, the vulnerability goes by the name zero-day, indicating that it has not yet been addressed. The most frightening thing about zero-days is the unknown factor: their capability and effect on risk are unknown.

47 Attachments Attachments can also be used as an attack vector. For example, if a user uploads a graphics file (for instance, a JPEG file) that has been altered to contain executable code, such as Java, then when the image is rendered, the code is executed. This can enable a wide range of attacks.

48 Locally Shared Objects
Locally shared objects (LSOs) are pieces of data that are stored on a user’s machine to save information from an application, such as a game. Frequently these are cookies used by Adobe Flash, called Flash Cookies, and can store information such as user preferences. As these can be manipulated outside of the application, they can represent a security or privacy threat.

49 Client-Side Attacks The web browser has become the major application for users to engage resources across the Web. Web-based attacks are covered in detail in Chapter 17.

50 Arbitrary/Remote Code Execution
One of the risks involved in taking user input and using it to create a command to be executed on a system is arbitrary or remote code execution. This attack involves an attacker preparing an input statement that changes the form or function of a prepared statement. A form of command injection, this attack can allow a user to insert arbitrary code and then remotely execute it on a system. This is a form of input validation failure, as users should not have the ability to change the way a program interacts with the host OS outside of a set of defined and approved methods.

51 Open Vulnerability and Assessment Language
The MITRE Corporation has done extensive research into software vulnerabilities. To enable collaboration between the many different parties involved in software development and maintenance, MITRE has developed a taxonomy of vulnerabilities, the Common Vulnerabilities and Exposures (CVE). This is just one of many related enumerations MITRE has developed to enable machine-readable data exchanges that facilitate system management across large enterprises. The CVE led to efforts such as the development of the Open Vulnerability and Assessment Language (OVAL). OVAL comprises two main elements: an XML-based machine-readable language for describing vulnerabilities, and a repository. In addition to the CVE and OVAL efforts, MITRE has developed a wide range of enumerations and standards designed to ease the automation of security management at the lowest levels across an enterprise. Additional efforts include the following:
■ Common Attack Pattern Enumeration and Classification (CAPEC)
■ Extensible Configuration Checklist Description Format (XCCDF)
■ Security Content Automation Protocol (SCAP)
■ Common Configuration Enumeration (CCE)
■ Common Platform Enumeration (CPE)
■ Common Weakness Enumeration (CWE)
■ Common Event Expression (CEE)
■ Common Result Format (CRF)
The Common Weakness Enumeration (CWE) is important for secure development in that it enumerates common patterns of development that lead to weaknesses and potential vulnerabilities. Additional information can be obtained from the MITRE Making Security Measurable web site.

52 Application Hardening
Application hardening works in the same fashion as system hardening (discussed in Chapter 14). The first step is the removal of unnecessary components or options. The second step is the proper configuration of the system as it is implemented. Every update or patch can change these conditions, and they should be confirmed after every update. The primary tools used to ensure a hardened system are a secure application configuration baseline and a patch management process. When properly employed, these tools can lead to the most secure system.

53 Application Configuration Baseline
A baseline is the set of proper settings for a computer system. An application configuration baseline outlines the proper settings and configurations for an application or set of applications. These settings include many elements, from application settings to security settings. Protection of the settings is crucial, and the most common mechanisms used to protect them include access control lists and protected directories. The documentation of the desired settings is an important security document, assisting administrators in ensuring that proper configurations are maintained across updates.

54 Application Patch Management
Application patch management is a fundamental component of application and system hardening. The objective is to be running the most secure version of an application. Most updates and patches include fixing security issues and closing vulnerabilities. Current patching is a requirement of many compliance schemes as well. Some patches may result in production system problems. A formal system of patch management is needed to test and implement patches in a change-controlled manner.

55 NoSQL Databases vs. SQL Databases
Current programming trends include topics such as whether to use SQL databases or NoSQL databases. SQL databases are those that use Structured Query Language to manipulate items that are referenced in a relational manner in the form of tables. NoSQL refers to data stores that employ neither SQL nor relational table structures. Each system has its strengths and weaknesses, and both can be used for a wide range of data storage needs. SQL databases are by far the most common, with implementations by IBM, Microsoft, and Oracle being the major players. NoSQL databases tend to be custom-built using low-level languages and lack many of the standards of existing databases. This has not stopped the growth of NoSQL databases in large-scale, well-resourced environments. The important factor in accessing data in a secure fashion is the correct employment of programming structures and frameworks to abstract the access process. Methods such as inline SQL generation coupled with input validation errors are a recipe for disaster in the form of SQL injection attacks.
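The difference between inline SQL generation and a parameterized query can be sketched with an in-memory SQLite table; the table and payload are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attack = "alice' OR '1'='1"  # classic injection payload

# Inline SQL generation: the payload rewrites the query logic.
unsafe = f"SELECT COUNT(*) FROM users WHERE name = '{attack}'"
print(conn.execute(unsafe).fetchone()[0])   # 1 -- the OR clause matches rows

# Parameterized query: the payload is treated as a literal value.
safe = "SELECT COUNT(*) FROM users WHERE name = ?"
print(conn.execute(safe, (attack,)).fetchone()[0])  # 0 -- no such user
```

With the placeholder form, the database driver handles quoting, so user input can never change the structure of the statement.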

56 Server-Side vs. Client-Side Validation
In a modern client/server environment, data can be checked for compliance with input/output requirements either on the server or on the client. There are advantages to verifying data elements on a client before sending them to the server—namely, efficiency. Doing checks on the client saves a round-trip, and its delays, before a user can be alerted to a problem. This can improve the usability of software interfaces. The client, however, is not a suitable place to perform any critical value checks or security checks. The reasons for this are twofold. First, the client can change anything after the check. Second, the data can be altered while in transit or at an intermediary proxy. For all checks that are essential, either for business reasons or security, the verification steps should be performed on the server side, where the data is free from unauthorized alterations. Exam Tip: All input validation should be performed on the server side of the client–server relationship, where it is free from outside influence and change.
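A server-side re-validation step might look like the following sketch; the catalog, field names, and limits are illustrative assumptions:

```python
import re

CATALOG = {"sku-001": 19.99}  # hypothetical server-side price list

def process_order(form: dict) -> float:
    """Server-side validation: never trust values the client computed.
    Re-derive the price from server data and re-check every field."""
    sku = form.get("sku", "")
    if sku not in CATALOG:
        raise ValueError("unknown product")
    qty_raw = form.get("qty", "")
    if not re.fullmatch(r"[1-9][0-9]{0,2}", qty_raw):  # 1..999 only
        raise ValueError("bad quantity")
    # Ignore any client-supplied 'price' field entirely.
    return CATALOG[sku] * int(qty_raw)

# A tampered request claims a price of 0.01; the server ignores it.
print(process_order({"sku": "sku-001", "qty": "2", "price": "0.01"}))
```

Client-side checks can still run for usability; the server simply repeats them, because only its copy of the data is beyond the user's reach.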

57 Code Signing (1 of 2) An important factor in ensuring that software is genuine and has not been altered is a method of testing the software integrity. The application of digital signatures to the code is a process known as code signing. Code signing involves applying a digital signature to code, providing a mechanism where the end user can verify the code integrity.

58 Code Signing (2 of 2) In addition to verifying the integrity of the code, digital signatures provide evidence as to the source of the software. Code signing rests upon the established public key infrastructure. To use code signing, a developer will need a key pair. For this key to be recognized by the end user, it needs to be signed by a recognized certificate authority.
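The verification flow can be sketched as follows. Note that this is only an illustration: real code signing uses asymmetric signatures issued under a PKI, while this sketch substitutes a shared-key HMAC (Python's standard library has no asymmetric-key primitives), so it shows the hash-and-verify shape, not the actual mechanism:

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # stand-in for the publisher's private key

def sign_code(code: bytes) -> str:
    """Publisher side: sign a digest of the shipped code."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_code(code: bytes, signature: str) -> bool:
    """End-user side: recompute and compare; any changed byte fails."""
    return hmac.compare_digest(sign_code(code), signature)

release = b"print('hello')"
sig = sign_code(release)
print(verify_code(release, sig))           # True  -- intact
print(verify_code(b"print('evil')", sig))  # False -- tampered
```

In real code signing, the signature is created with the publisher's private key and verified with the public key from a CA-issued certificate, so the verifier needs no shared secret.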

59 Encryption Encryption is one of the elements where secure coding techniques have some unique guidance: "Never roll your own crypto." You should not write your own cryptographic algorithms, and you should not implement standard algorithms by yourself. Crypto is almost impossible to invent and very hard to implement correctly.

60 Obfuscation/Camouflage
Obfuscation or camouflage is the hiding of obvious meaning from observation. Obscurity is not considered adequate security under most circumstances. Adding obfuscation or camouflage to a system to make it harder for an attacker to understand and exploit is a good thing. Obfuscated code, or code that is hard or even nearly impossible to read, is a ticking time bomb.

61 Code Reuse/Dead Code There is significant opportunity to reduce development costs through reuse. The downside of massive reuse is associated with a monoculture environment. During the design phase, decisions should be made as to the appropriate level of reuse. Dead code is code that, while it may be executed, produces results that are never used elsewhere in the program.

62 Memory Management Memory management encompasses the actions used to control and coordinate computer memory, assigning memory to variables and reclaiming it when no longer being used. Errors in memory management can result in a program that has a memory leak, and it can grow over time, consuming more and more resources. The routine to clean up memory that has been allocated in a program but is no longer needed is called garbage collection.

63 Use of Third-Party Libraries and SDKs
Programming today is to a great extent an exercise in using third-party libraries and software development kits (SDKs). Once code has been debugged and proven to work, rewriting it is generally not a valuable use of time. Some fairly complex routines, such as encryption, have vetted, proven library sets that remove a lot of risk from programming these functions.

64 Data Exposure Data exposure is the loss of control over data from a system during operations. Data must be protected during storage, during communication, and even at times during use. Programming team’s responsibility for data. Data can be lost to unauthorized parties (a failure of confidentiality). Data can be changed by an unauthorized party (a failure of integrity).

65 Code Quality and Testing (1 of 2)
Code quality does not end with development because the code needs to be delivered and installed both intact and correctly on the target system. Code analysis is a term used to describe the processes to inspect code for weaknesses and vulnerabilities. Both static and dynamic analyses are typically done with tools, which are much better at the detailed analysis steps needed for any but the smallest code samples.

66 Code Quality and Testing (2 of 2)
Code analysis can be performed at virtually any level of development. When the analysis is done by teams of humans reading the code, typically at the smaller unit level, it is referred to as code reviews. Code analysis should be done at every level of development because the sooner that weaknesses and vulnerabilities are discovered, the easier they are to fix.

67 Static Code Analyzers Static code analysis is when the code is examined without being executed. Can be performed on source and object code bases The term source code is typically used to designate the high-level language code. Static code analysis is frequently performed using automated tools called static code analyzers or source code analyzers. Provide advantages when checking syntax.
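A minimal illustration of static analysis, examining source code without executing it, can be built on Python's ast module; the rule here (flag eval/exec calls) is an arbitrary example of the kind of check such tools perform:

```python
import ast

def find_dangerous_calls(source: str) -> list:
    """A toy static analyzer: walk the parsed AST (no execution)
    and flag calls to eval/exec with their line numbers."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("eval", "exec")):
            flagged.append((node.lineno, node.func.id))
    return flagged

sample = """x = input()
y = eval(x)
print(y)
"""
print(find_dangerous_calls(sample))   # [(2, 'eval')]
```

Production analyzers apply hundreds of such rules plus data-flow analysis, but the principle is the same: inspect the code's structure, never run it.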

68 Dynamic Analysis (Fuzzing) (1 of 4)
Dynamic analysis is performed while the software is executed, either on a target or on an emulated system. Requires specialized automation to perform specific testing. Fuzzing (or fuzz testing) is a brute-force method of addressing input validation issues and vulnerabilities. Fuzzing has been used by hackers for years to find potentially exploitable buffer overflows, without any specific knowledge of the coding.

69 Dynamic Analysis (Fuzzing) (2 of 4)
Fuzz testing works perfectly fine regardless of the type of testing, white box or black box. Fuzzing is relatively new to the development scene, but it is rapidly maturing and will soon be on nearly equal footing with other automated code-checking tools. Fuzz testing works by sending a multitude of input signals and seeing how the program handles them.

70 Dynamic Analysis (Fuzzing) (3 of 4)
Ways to classify fuzz testing: One set of categories is smart and dumb, indicating the type of logic used in creating the input values. Smart testing uses knowledge of what could go wrong and malforms the inputs using this knowledge. Dumb testing just uses random inputs. Other terms used to describe fuzzers are generation-based and mutation-based.

71 Dynamic Analysis (Fuzzing) (4 of 4)
Generation-based fuzz testing uses the specifications of input streams to determine the data streams that are to be used in testing. Mutation-based fuzzers take known good traffic and mutate it in specific ways to create new input streams for testing.
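A mutation-based fuzzer can be sketched in a few lines; the toy parser and single-byte mutation strategy here are illustrative:

```python
import random

def parse_record(data: bytes) -> tuple:
    """Toy parser under test: expects 'NAME:AGE' with an integer age."""
    name, age = data.split(b":")            # raises unless exactly one colon
    return name.decode("ascii"), int(age)   # raises on non-ASCII or bad digits

def mutate(seed: bytes) -> bytes:
    """Mutation-based fuzzing: flip one random byte of known-good input."""
    buf = bytearray(seed)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

random.seed(1)  # deterministic run for the example
seed = b"alice:42"
crashes = 0
for _ in range(500):
    try:
        parse_record(mutate(seed))
    except Exception:   # each exception marks a potential robustness bug
        crashes += 1
print(f"{crashes} of 500 mutated inputs raised exceptions")
```

A generation-based fuzzer would instead build inputs from the format specification ("a name, a colon, digits") and deliberately violate each rule in turn.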

72 Stress Testing Goal of performance testing is to determine bottlenecks and performance factors for the systems under test. Load testing involves running the system under a controlled speed environment. Stress testing takes the system past this operating point to see how it responds to overload conditions. Requirements for the software under development should be expressed in a service level agreement (SLA).

73 Sandboxing Sandboxing is a term for the execution of computer code in an environment designed to isolate the code from direct contact with the target system. Sandboxes are used to execute untrusted code, code from guests, and unverified programs. The level of protection offered by a sandbox depends upon the level of isolation and mediation offered.

74 Model Verification
Verification is ensuring that the code does what it is supposed to do. It is more complex than just running the program and looking for runtime errors; program results for a given set of inputs need to match the expected results per the system model. Verification testing is the assurance that the code as developed meets the design requirements. Validation, in contrast, is checking whether the program specification captures the requirements from the customer.

75 Compiled vs. Runtime Code
Compiled code is code that is written in one language and then run through a compiler and transformed into executable code that can be run on a system. Interpreters create runtime code that can be executed via an interpreter engine, like a Java virtual machine (JVM), on a computer system. Systems such as just-in-time compilers and bytecode interpreters blur the traditional categorizations of compilers and interpreters.

76 Secure DevOps DevOps is a combination of development and operations.
Can be considered the anti-waterfall model because rather than going from phase to phase, in DevOps, as small changes are ready to advance, they advance. Secure DevOps is the addition of security steps to the DevOps process. Just as you can add security steps to the waterfall model, or any other software development model, you can add them to DevOps as well, resulting in a secure DevOps outcome.

77 Security Automation One of the key elements of DevOps is automation.
Relies upon automation for much of its efficiencies. Security automation can do the same for security that automation has in DevOps. Automating routine and extensive processes allows fewer resources to cover more environment in a more effective and efficient manner. Automation removes the manual labor that costs money to employ, especially skilled cybersecurity personnel.

78 Continuous Integration
Continuous integration is the DevOps manner of continually updating and improving the production code base. By using high levels of automation and safety nets of automated back-out routines, continuous integration allows for testing and updating even minor changes without a lot of overhead. A whole series of smaller, single-purpose integrations is run, which isolates changes to a small, manageable number and reduces time-consuming errors and interaction errors.

79 Baselining Baselining is the process of determining a standard set of functionality and performance. This is a metrics-driven item, where later changes can be compared to the baseline for performance and other figures. It is through baselining that performance and feature creep are countered by the management team. If a new feature impacts performance enough, then the new feature might be withheld.

80 Immutable Systems An immutable system is a system that, once deployed, is never modified, patched, or upgraded. If a patch or update is required, the system is merely replaced with a new, updated one. In a typical system (one that is mutable, or changeable, and that is patched and updated in place), it is extremely difficult to conclusively know whether future changes to the system are authorized and whether they are correctly applied, in part because Linux binaries and libraries are scattered over many directories.

81 Infrastructure as Code
Infrastructure as code is a key attribute of enabling best practices in DevOps. As systems have become larger, more complex, and more interrelated, connecting developers to implementers has created an environment of infrastructure as code, which is a version of infrastructure as a service.

82 Version Control and Change Management (1 of 2)
Version control is as simple as tracking which version of a program is being worked on, whether in dev, test, or production. Having the availability of multiple versions brings into focus the issue of change management. In traditional software publishing, a new version required a new install and fairly significant testing because the level of change could be drastic and call into question issues of compatibility, functionality, and even correctness.

83 Version Control and Change Management (2 of 2)
DevOps turned the tables on this equation by introducing the idea that developers and production work together to create, in essence, a series of micro-releases, so that any real problems are associated with single changes and not bogged down by interactions between multiple module changes. You need a change management process that ensures all changes in production are authorized, properly tested, and, if they fail, rolled back, as well as maintaining current, accurate documentation.

84 Provisioning and Deprovisioning (1 of 2)
Provisioning is the process of assigning permissions or authorities to objects. Users can be provisioned into groups, and computer processes or threads can be provisioned to higher levels of authority when executing. Deprovisioning is the removal of permissions or authorities. In secure coding, the practice is to provision a thread to an elevated execution permission level (e.g., root) only during the time that the administrative permissions are needed.

85 Provisioning and Deprovisioning (2 of 2)
After those steps have passed, the thread can be deprovisioned back to a lower access level. This combination lowers the period of time an application is at an increased level of authority, reducing the risk exposure should the program get hijacked or hacked.
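The provision/deprovision pattern can be sketched with a context manager; the in-process privilege model here is purely hypothetical, standing in for real OS privilege calls such as seteuid:

```python
from contextlib import contextmanager

# Hypothetical in-process privilege level, for illustration only.
current_level = "user"

@contextmanager
def elevated(level: str = "root"):
    """Provision elevated authority only for the block that needs it,
    then deprovision, even if the block raises an exception."""
    global current_level
    previous, current_level = current_level, level
    try:
        yield
    finally:
        current_level = previous  # deprovision back to the lower level

def do_admin_task():
    assert current_level == "root", "insufficient privilege"

with elevated():
    do_admin_task()        # runs at elevated authority
print(current_level)       # 'user' -- authority already dropped
```

The try/finally guarantee is the point: even a crash inside the elevated block cannot leave the thread running with the higher authority.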

86 Chapter Summary Describe how secure coding can be incorporated into the software development process. List the major types of coding errors and their root causes. Describe good software development practices and explain how they impact application security. Describe how using a software development process enforces security inclusion in a project. Learn about application-hardening techniques.

What is the one item that could be labeled as the "most wanted" item in coding security?

If there's one item that could be labeled as the "most wanted" in coding security, it would be the buffer overflow. The CERT/CC at Carnegie Mellon University estimates that nearly half of all exploits of computer programs stem historically from some form of buffer overflow.

What does the term spiral method reference?

The spiral model is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping.

What is a security issue with Common Gateway Interface CGI )?

Common Gateway Interface is used to communicate between the user client and the web application. The vulnerability exists due to a bug in the use of the HTTP Proxy environment variable. This variable could allow an unauthorised redirection of traffic. This bug can be exploited when application code is running on CGI.

What is a vulnerability called when it has been discovered by hackers but not by the developers of the software?

The term “zero-day” refers to a newly discovered software vulnerability and the fact that developers have zero days to fix the problem because it has been — and has the potential to be — exploited by hackers.
