WTST 11 2012

TEACHING SECURITY-RELATED SOFTWARE TESTING

WTST 2012: The 11th Annual Workshop on Teaching Software Testing

January 27-29, 2012

at the Harris Institute for Assured Information

Florida Institute of Technology, Melbourne, Florida

http://wtst.org

Software testing is often described as a central part of software security, but it has a surprisingly small role in security-related curricula. Over the next five years, we hope to change this. If there is sufficient interest, we plan to focus WTST 2012 through 2016 on instructional support for teaching security-related testing.

OUR GOALS FOR WTST 2012

  • Survey the domain: What should we consider as part of “security-related software testing”?
  • Cluster the domain: What areas of security-related testing would fit well together in the same course?
  • Characterize some of the key tasks:
    • Some types of work are (or should be) routine. To do them well, an organization needs a clearly defined, repeatable process that is easy to delegate.
    • Other types are cognitively complex. Their broader goals might stay stable, but the details constantly change as circumstances and threats evolve.
    • Still other types center on creating, maintaining, and extending technology, such as tools to support testing.
  • Publish this overview (survey / clustering / characterization)
  • Apply for instructional development grants. We (CSTER) intend to apply for funding. We hope to collaborate with other institutions and practitioners, and to foster other collaborations that lead to proposals independent of CSTER.

UNDERLYING VIEWPOINT

The Workshop on Teaching Software Testing is concerned with the practical aspects of teaching university-caliber software testing courses to academic or commercial students.

We see software testing as a cognitively complex activity, an active search for quality-related information rather than a tedious collection of routines. We see it as more practical than theoretical, more creative than prescriptive, more focused on investigation than assurance (you can’t prove a system is secure by testing it), more technical than managerial, and more interested in exploring risks than defining processes.

We think testing is too broad an area to cover fully in a single course. A course that tries to teach too much will be too superficial to have any real value. Rather than designing a single course to serve as a comprehensive model, we think the field is better served with several designs for several courses.

We are particularly interested in online courses that promote deeper knowledge and skill. You can see our work on software testing at http://www.testingeducation.org/BBST. Online courses and courseware, especially Creative Commons courseware, make it possible for students to learn multiple perspectives and to study new topics and learn new skills on a schedule that works for them.

WHO SHOULD ATTEND

We invite participation by:

  • academics who have experience teaching courses on testing or security
  • practitioners who teach professional seminars on software testing or security
  • one or two graduate students
  • a few seasoned teachers or testers who are beginning to build their strengths in teaching software testing or security.

There is no fee to attend this meeting. You pay for your seat through the value of your participation. Participation in the workshop is by invitation based on a proposal. We expect to accept 15 participants with an absolute upper bound of 22.


INTRODUCTORY NOTES and COMMENTS

Kaner:

The Challenges of Educating Testers on Security (and Security People on Testing)

Cem Kaner

When I served on the United States Election Assistance Commission’s Technical Guidelines Development Committee, my primary responsibilities involved helping draft guidelines for certification of electronic voting machines. (See, for example, http://www.kaner.com/pdfs/commentsOnVVSSubmittedToEAC.pdf)

The revisions of the voting systems guidelines that I reviewed all called for a testing phase in which security experts would do exploratory testing for security issues.  In contrast, there was no provision for any exploratory testing by testers. Testing by testers was routinized and, in my opinion, trivialized. As far as I could tell, any time I suggested that skilled testers could find a lot of interesting stuff if they were allowed to look, those suggestions fell on ears that were deaf, incredulous, or motivated to keep the cost, scope and results of testing predictable.

Why should security testing be exploratory and testing associated with all other quality characteristics be pre-announced, repetitious, and routine? The answer I often got was (to quote Richard’s title): “Security Testing != Testing.”

With that came a belief that testers couldn’t make good use of the freedom that we seem to take for granted in security work.

The bias continues to appear in graduate degree programs on information security. My understanding is that most (or all?) of these degree programs have no courses on security-related testing.

Let me suggest that there is a continuum of need:

  • We need some number of security gurus who have deep knowledge of system internals and a deep technical understanding of risks. But there is a limited supply of these people and it will stay limited.
  • Is there a place for people who have less knowledge (much more than zero), less technical skill (much more than zero), but a creative, skeptical, critical mindset? Are there weaknesses in systems that such people could hunt effectively?

There are drones in testing who can’t do anything without a script or who can’t design any test without a detailed specification. But there are drones in security too, people who latch on to penetration testing tools or “best practices” or standards that they don’t understand.

We’ll probably discuss cognitive psychology throughout the workshop, so I’ll skip it here. Instead, in this talk, I want to focus on techniques.

High Volume Automated Testing

In particular, I want to highlight a collection of testing techniques that are like fuzzing in the sense that they hammer the program with long sequences of algorithmically created tests. However, where I come from, fuzzing is “dumb monkey” testing. Dumb monkeys are useful—this is not a derogatory term. But they have no oracle, no underlying insight into the program under test. Many types of intermittent failures that are hard to replicate with traditional methods can perhaps be attacked more effectively with smart monkeys. One of the interesting questions in practical testing is how to create smarter monkeys.

I’ll sketch this area because I think it exposes a style of testing (intensely automated exploratory testing) that involves many of the skills that I suspect are involved in some skilled security testing. It also highlights techniques that are not dependent on deep understanding of operating system internals.
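
To make the idea of a smarter monkey a little more concrete, here is a minimal sketch in Python: a harness that hammers a function under test with a long, algorithmically generated input sequence and uses a reference model as its oracle. The names and the toy absolute-value example are illustrative assumptions, not a description of any particular tool.

```python
import random

def smart_monkey(function_under_test, reference_model, runs=100_000, seed=1):
    """High-volume automated testing: hammer the function under test with a long,
    algorithmically generated sequence of inputs and use a reference model as the
    oracle, logging every crash or disagreement for later investigation."""
    rng = random.Random(seed)
    failures = []
    for i in range(runs):
        # Algorithmic input generation: mix boundary values with random values.
        x = rng.choice([0, 1, -1, 2**31 - 1, rng.randint(-10**6, 10**6)])
        try:
            actual = function_under_test(x)
        except Exception as exc:          # a crash is a failure too
            failures.append((i, x, f"crash: {exc!r}"))
            continue
        expected = reference_model(x)     # the oracle: slow but trusted
        if actual != expected:
            failures.append((i, x, f"expected {expected}, got {actual}"))
    return failures

# Hypothetical use: an "optimized" absolute-value routine checked against abs().
buggy_abs = lambda x: x if x >= -1 else -x   # deliberately wrong for x == -1
for failure in smart_monkey(buggy_abs, abs)[:3]:
    print(failure)
```

The harness itself is trivial; the interesting work lies in the oracle and the input generator, which is where the monkey gets its smarts.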

Testing != Debugging.

Hunting bugs doesn’t necessarily entail fixing them. Or knowing how to fix them. Or understanding the system internals that make them possible.

I wonder what other techniques we could teach that could be adapted to security-related testing.

Risk-Based Testing

In risk-based testing, we imagine how the software could fail, then we hunt for ways to trigger failures of those types. We can imagine functional risks, performance risks, real-time-related risks, risks of types of failures or misrepresentations that lead to litigation, risks to market impact, lots of fundamentally different types of risks.

I suspect that we (us, here at WTST) can imagine security-related risks too. I think there’s a big literature on this (I don’t claim to have read it). If that’s true, I suspect that we can imagine ways to train testers to hunt for some of the bugs those risks point to.

  • The goal is not to create risk-inspired test scripts that become obsolete almost immediately.
  • The goal is to create instances from a family of tests (e.g., one risk inspires one family): specific instances that are particularly relevant now (when they are created and used), based on what else we have learned about the program and its risks. A minimal sketch follows this list.
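
For example, here is a minimal, hypothetical sketch in Python of one such family. The imagined risk is that an application mishandles over-long or malformed name fields; the generator produces specific instances on demand, tuned to whatever we currently believe about the program. The field, the length limit, and the candidate values are illustrative assumptions.

```python
import random

def name_field_risk_family(max_len, rng=None):
    """One imagined risk ("over-long or malformed name fields are mishandled")
    inspires a family of tests; each call generates fresh, currently relevant
    instances rather than a fixed script that goes stale."""
    rng = rng or random.Random(2)
    yield "A" * (max_len - 1)            # just under the believed limit
    yield "A" * max_len                  # at the limit
    yield "A" * (max_len + 1)            # just over the limit
    yield "A" * (max_len * 10)           # grossly over the limit
    yield ""                             # empty field
    yield "O'Brien; DROP TABLE users"    # quote and injection-shaped text
    yield "née Ångström \u202e"          # non-ASCII plus a bidi control character
    yield "".join(chr(rng.randint(32, 0x10FFFF)) for _ in range(max_len))  # random junk

# If we learn the real limit is 64 rather than 255, we simply regenerate:
for candidate in name_field_risk_family(max_len=64):
    pass  # submit candidate to the feature under test and watch for misbehavior
```

The point is that the risk, not the script, is the durable artifact; the instances are cheap and are regenerated as our knowledge of the program changes.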

How much of this work could be done by someone who is not a security guru? How much training will this person need? What of that training should be supplied in the security testing course?

What Other Perspectives Could Lead to Useful Training?


ATTENDEES

We are still confirming attendees and finalizing presentation topics. The following is a partial listing:

ATTENDEE | AFFILIATION | PRESENTATION ABSTRACT (if applicable)
Balasooriya, Janaka | Arizona State University
Bolton, Michael | DevelopSense
Carvalho, Marco | Florida Tech
Fiedler, Rebecca | Kaner, Fiedler, & Associates, LLC
Fioravanti, Mark | Dept of Homeland Security and Florida Tech
Ford, Richard | Florida Tech | Security Testing != Testing

In many development organizations, the security “expert” is a wizard, paid top dollar and held in high regard. In contrast, testers seldom reach this status and are all too often seen as a step “below” the programmers in the organization. A tester and a security tester are not fungible… not even close. Although this is a caricature, it does represent a fairly common perception of these roles, and it raises the question of what separates the skill set of a competent tester from that of a competent security tester. Against the backdrop of a massive shortage of qualified security experts, this session describes a continuum of security-testing levels, ranging from the simple to the esoteric, and discusses the skills, knowledge, and analytical abilities each level requires. By the end of the session, some of the key differences in perception and in actual skills will have been highlighted, along with a list of the challenges in, and returns from, training testers to be more security-savvy.

Gallagher, Keith | Florida Tech
Gentleman, Morven | Dalhousie University
Hoffman, Daniel | University of Victoria | Worlds in Collision: Ethernet and the Factory Floor

Over the past ten years industrial control systems have seen a significant increase in the use of computer networks and related Internet technologies to transfer information from the plant floor to business computer systems. For example, most industrial plants now use networked servers to allow business users to access real-time data from the distributed control systems (DCS) and programmable logic controllers (PLC). There are also many other possible business/process interfaces, such as using remote Windows sessions to the DCS, or direct file transfer from PLCs to spreadsheets. Regardless of the method, each involves a network connection between the process and the business systems.

At the same time, there has been an explosion in the use of Ethernet and TCP/IP for process control networks. For many years the control systems used proprietary industrial networks, giving them a considerable degree of protection from the outside world. Today many DCS and PLC systems use protocols like Ethernet, TCP/IP, and HTTP as a critical component of their architecture, resulting in easier interfacing at the cost of less isolation and security.

While technologies such as Ethernet and TCP/IP have made the interfacing of industrial equipment much easier, there is now significantly less isolation from the outside world. Network security problems from the business network can be passed on to the process network, putting industrial production and human safety at risk.

Kabbani, Nawwar | Florida Tech | Security Testing of SOA Systems

Service-Oriented Architecture (SOA) is a paradigm that organizes and uses distributed computing capabilities to bring together a technical solution to a business problem. However, despite enterprises’ large and increasing dependency on SOA, testing SOA systems is still a nascent and immature field. In particular, testing those systems from a security perspective is an essential yet underserved activity.

SOA has its own specific characteristics and attributes that raise unique challenges to SOA testing and make some of the techniques used in testing SOA applications different from traditional software testing techniques.

SOA is often implemented using Web services, which come in two main flavors: traditional SOAP services and the newer, lightweight ReSTful services. In either case, they inherit many characteristics from traditional Web applications and are susceptible to similar security vulnerabilities. In fact, many of the known Web-system vulnerabilities and attacks also apply to SOA; for example, SQL injection, buffer overflow vulnerabilities, and session hijacking. As a result, many of the techniques used to test Web security can also be used in Web services testing.

Security testing can be done by simulating what a hacker might do to breach the system’s security. This may start with a list of well-known attacks, or quick tests, intended to reveal one of the well-known security vulnerabilities. This is usually followed up with a series of exploratory tests to find the best way to exploit the vulnerability with maximum impact. This kind of testing is usually referred to as penetration testing. Fuzzing, which generates random or semi-random tests, is another technique commonly used for security testing.
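
As a small illustration of the fuzzing style described here, the following sketch (Python, using the third-party requests library) sends attack-shaped and semi-random payloads to a single parameter of a hypothetical Web service and flags responses that look suspicious. The URL, parameter name, and detection heuristics are assumptions made for the example only.

```python
import random
import requests  # third-party HTTP client

ATTACK_SHAPED = [
    "' OR '1'='1",                     # SQL-injection-shaped input
    "<script>alert(1)</script>",       # reflected-XSS-shaped input
    "A" * 65536,                       # buffer-overflow-shaped input
    "../../../../etc/passwd",          # path-traversal-shaped input
]

def fuzz_parameter(url, param, rounds=200, seed=0):
    """Semi-random fuzzing of one request parameter of a (hypothetical) service.
    Flags transport errors, 5xx responses, leaked exception text, and reflected input."""
    rng = random.Random(seed)
    findings = []
    for _ in range(rounds):
        payload = rng.choice(ATTACK_SHAPED + [
            "".join(chr(rng.randint(32, 255)) for _ in range(rng.randint(1, 512)))
        ])
        try:
            resp = requests.get(url, params={param: payload}, timeout=5)
        except requests.RequestException as exc:
            findings.append((payload, f"transport error: {exc!r}"))
            continue
        if resp.status_code >= 500 or "Exception" in resp.text or payload in resp.text:
            findings.append((payload, f"HTTP {resp.status_code}"))
    return findings

# Hypothetical target in a test environment:
# findings = fuzz_parameter("http://test.example.com/api/orders", "customerName")
```

Each finding from such a script is only a lead; deciding whether and how it can actually be exploited is the exploratory follow-up described above.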

Some SOA testing tools are advertised as coming with a library of automated penetration attacks that can allegedly make security testing fast and easy. We will show that these pre-built automated security tests tend to be ineffective without an active human role in the design and evaluation of the tests. In fact, effective security testing has to be done in a (highly) exploratory style.

Kaner, Cem | Florida Tech | The Challenges of Educating Testers on Security (and Security People on Testing)

See Introductory Notes & Comments above

Kelly, Michael | DeveloperTown | Workshop facilitator
Knowles, Ben | Dell SecureWorks | Vulnerability Lifecycle for Testers

This presentation will help bridge the knowledge/culture gap between the testing experts and security experts who will be attending WTST.

Mayron, Liam | Florida Tech
Oliver, Carol | Florida Tech
Shah, Bharat | Lockheed
Tabor, Greg | FedEx
Weber, Jens | University of Victoria | The PPP approach to security testing education: Problem-based, Project-oriented, Peer-driven

Dr. Weber has strong teaching interests in the area of software security and security engineering. At UVic, he has developed a course in this area (entitled SENG 360 – Security Engineering) and has been teaching it for the last five years (see the course outline at http://web.uvic.ca/calendar2011/CDs/SENG/360.html). SENG 360 is a mandatory course in the accredited Bachelor of Software Engineering (BSEng) degree program, and security-related testing is part of the learning objectives for this course.

Dr. Weber’s talk title will be “The PPP approach to security testing education: Problem-based, Project-oriented, Peer-driven.” For more details about Dr. Weber’s background, interests, and the course, see this document.

HOW THE MEETING WILL WORK

WTST is a workshop, not a typical conference.

  • We will have a few presentations, but the intent of these is to drive discussion rather than to create an archivable publication.
    • We are glad to start from already-published papers, if they are presented by the author and they would serve as a strong focus for valuable discussion.
    • We are glad to work from slides, mindmaps, or diagrams.
  • Some of our sessions will be activities, such as brainstorming sessions, collaborative searching for information, creating examples, evaluating ideas or work products, and lightning presentations (presentations limited to 5 minutes, plus discussion).
  • In a typical presentation, the presenter speaks 10 to 90 minutes, followed by discussion. There is no fixed time for discussion. Past sessions’ discussions have run from 1 minute to 4 hours. During the discussion, a participant might ask the presenter simple or detailed questions, describe consistent or contrary experiences or data, present a different approach to the same problem, or (respectfully and collegially) argue with the presenter.

Our agenda will evolve during the workshop. If we start making significant progress on something, we are likely to stick with it even if that means cutting or timeboxing some other activities or presentations.

Presenters must provide materials that they share with the workshop under a Creative Commons license, allowing reuse by other teachers. Such materials will be posted at http://wtst.org.

HOSTS

The hosts of the meeting are:

LOCATION AND TRAVEL INFORMATION

We will hold the meetings at

Harris Center for Assured Information, Room 327

Florida Tech, 150 W University Blvd,

Melbourne, FL

Airport

Melbourne International Airport is 3 miles from the hotel and the meeting site. It is served by Delta Airlines and US Airways. Alternatively, the Orlando International Airport offers more flights and more non-stops but is 65 miles from the meeting location.

Hotel

We recommend the Courtyard by Marriott – West Melbourne located at 2101 W. New Haven Avenue in Melbourne, FL.

Please call 1-800-321-2211 or 321-724-6400 to book your room by January 2. Be sure to ask for the special WTST rate of $89 per night. Tax is an additional 11%.

All reservations must be guaranteed with a credit card by January 2, 2012 at 6:00 pm. If rooms are not reserved, they will be released for general sale. After that date, reservations can be made only based upon availability.

For additional hotel information, please visit the travel information page on this site or the hotel website at http://www.marriott.com/hotels/travel/mlbch-courtyard-melbourne-west/

OUR INTELLECTUAL PROPERTY AGREEMENT

We expect to publish some outcomes of this meeting. Each of us will probably have our own take on what was learned. Participants (all people in the room) agree to the following:

  • Any of us can publish the results as we see them. None of us is the official reporter of the meeting unless we decide at the meeting that we want a reporter.
  • Any materials initially presented at the meeting or developed at the meeting may be posted to any of our web sites or quoted in any of our other publications, without further permission. That is, if I write a paper, you can put it on your web site. If you write a paper, I can put it on my web site. If we make flipchart notes, those can go up on the web sites too. None of us has exclusive control over this material. Restrictions of rights must be identified on the paper itself.
    • NOTE: Some circulated papers are already published or are headed to another publisher. If you want to limit republication of a paper or slide set, please note the rights you are reserving on your document. The shared license to republish is our default rule, which applies in the absence of an asserted restriction.
  • The usual rules of attribution apply. If you write a paper or develop an idea or method, anyone who quotes or summarizes your work should attribute it to you. However, many ideas will develop in discussion and will be hard (and unnecessary) to attribute to one person.
  • Any publication of material from this meeting will list all attendees, as well as the hosting organization, as contributors to the ideas published.
  • When possible, articles should be circulated to WTST 2012 attendees before being published. At a minimum, notification of publication will be circulated.
  • Any attendee may request that his or her name be removed from the list of attendees identified on a specific paper.
  • If you have information that you consider proprietary or that otherwise should not be disclosed in light of these publication rules, please do not reveal that information to the group.

ACKNOWLEDGEMENTS

Support for this meeting comes from the Harris Institute for Assured Information at the Florida Institute of Technology, and Kaner, Fiedler & Associates, LLC.

Funding for WTST 1-5 came primarily from the National Science Foundation, under grant EIA-0113539 ITR/SY+PE “Improving the Education of Software Testers.” Partial funding for the Advisory Board meetings in WTST 6-10 came from the National Science Foundation, under grant CCLI-0717613 “Adaptation & Implementation of an Activity-Based Online or Hybrid Course in Software Testing”.

Opinions expressed at WTST or published in connection with WTST do not necessarily reflect the views of NSF.

WTST is a peer conference in the tradition of the Los Altos Workshops on Software Testing.