Script Check Engine for XCCDF
Script Check Engine (SCE) is an alternative check engine that can be used to evaluate an XCCDF (Extensible Configuration Checklist Description Format) benchmark. XCCDF is the SCAP component for expressing security checklists.

What is SCAP?

Security Content Automation Protocol (SCAP) [1] is a suite of interoperable specifications that standardize the format in which software flaw and configuration information is communicated. SCAP is a multi-purpose framework; it consists of 11 components grouped into 5 categories: languages, reporting formats, enumerations, measurement/scoring systems, and integrity.

The SCAP language category consists of these components:

  • Extensible Configuration Checklist Description Format (XCCDF) - for authoring security checklists/benchmarks and for reporting evaluation results

  • Open Vulnerability and Assessment Language (OVAL) - automated checks that assess the system

  • Open Checklist Interactive Language (OCIL) - checks that collect information from people

Within the SCAP world, any security checklist expressed in XCCDF can be evaluated using either OVAL or OCIL. Which is nice: you have a choice between machine checks and human questionnaires. Everything is standardized, secure, XML-based, and designed for enterprise environments. Should we break it with shell scripts again?


Well, there are a few drawbacks to the XCCDF->OVAL/OCIL approach.

The first is the complexity of OVAL content authoring, as OVAL is a declarative language. OVAL element IDs and reference handling are simple for a machine parser to manage but quite difficult and error-prone for a human. The XML syntax is also an issue: in a huge file, related elements end up widely spaced, and maintenance gets exponentially harder as the file grows.

Second, to write OVAL content you have to be familiar with the language, and since it is very specific, you can't easily transfer your existing skills from other languages. You must understand how it works, which also requires reading mailing lists and documentation - the learning curve is quite steep. If you have to write a custom check that could be just one line in bash but 30 lines in OVAL, you are probably going to wonder whether the OVAL route is worth it. Especially if you have deadlines to meet.

The next issue you might run into is that you want to write a check, but the OVAL object/state that could assess that information from a system is missing from the specification. You don't have to give up in this case, as OVAL is an open standard: you can write to the OVAL Developer List and propose a new object that solves the problem. Obviously, it will take some time until a new release of the standard is out and tools are updated.

Also, if you already have checks that you want to reuse, you are unlikely to be able to port them to OVAL easily. Chances are you will have to recreate them from scratch.

My impression is that these drawbacks create a barrier to SCAP adoption in certain environments and use cases. Take Fedora, for example - development in the Fedora Project is so fast that the standards are always a step behind. On the other hand, it would be very nice if there were a community interested in creating security checklists and profiles for Fedora. How do we get past that? We need another option besides OVAL and OCIL that is more suitable for prototyping, reuse, and bleeding-edge development.

And that’s why we introduced SCE.

How does it work?

One of the goals was to make it as simple as possible and to avoid making decisions for the user. Therefore we chose to support everything that is executable from the command line (scripts with shebangs, or even Linux binaries). This should allow complete freedom. While we realize that this doesn't enforce any standards and will make collaboration on checking scripts harder, we believe that it's up to the content creation projects to enforce such standards.

The new Script Check Engine is registered under a namespace of our choice ""; the namespace URI matches the URL of a wiki page describing SCE. To reference SCE content in your XCCDF, you simply use that namespace as the "system" attribute and the path to the script as the "href" attribute.

Reporting the XCCDF result

Before we run the scripts, we set environment variables to feed them the XCCDF variables and the possible exit codes (the XCCDF_RESULT_* variables). The scripts run, optionally print something to their stdout/stderr (we collect both of these; see the section on script output), and finish with an exit code (exit(exit_code) in C, sys.exit(exit_code) in Python, ...). This exit code is mapped to the xccdf_test_result_type_t enum.
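As a sketch of this protocol, the snippet below drives a tiny check script the way SCE would: it exports the exit-code variables, runs the script, and maps the exit status back to a result. The checked file, the check itself, and the numeric codes (pass=1, fail=2, mirroring openscap's xccdf_test_result_type_t) are illustrative assumptions, not the engine's actual internals.

```shell
#!/bin/sh
# Sketch of the SCE protocol (the check and the target file are stand-ins).
check=$(mktemp)   # the check script the engine would execute
target=$(mktemp)  # a stand-in for the file the check inspects

cat > "$check" <<'EOF'
#!/bin/sh
# Hypothetical check: fail when $TARGET is world-writable.
if [ -n "$(find "$TARGET" -perm -o+w)" ]; then
    echo "$TARGET is world-writable" >&2   # stderr is captured for the report
    exit "$XCCDF_RESULT_FAIL"
fi
exit "$XCCDF_RESULT_PASS"
EOF
chmod +x "$check"

run_check() {  # run the script as SCE would and map its exit status
    XCCDF_RESULT_PASS=1 XCCDF_RESULT_FAIL=2 TARGET="$target" "$check" 2>/dev/null
    case $? in
        1) echo "result: pass" ;;
        2) echo "result: fail" ;;
        *) echo "result: error" ;;
    esac
}

chmod 644 "$target"; r1=$(run_check); echo "$r1"   # result: pass
chmod 666 "$target"; r2=$(run_check); echo "$r2"   # result: fail
```

The point of the exercise is that the check script itself stays plain shell: it only has to read its environment and pick an exit code.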

Reporting reasons when check fails

Usually, when a script fails, you want to know why, so that you can correct the cause. If the scripts only returned the final result, this would be difficult. That is why we redirect stdout and stderr and capture the output. We put this output into the SCE result file, where it can be reused for the final XHTML report.


XCCDF snippet

<Rule id="rule-20" selected="true">
    <xhtml:pre xmlns:xhtml="">Checks if you have SELinux enabled and monitors any boolean changes.</xhtml:pre>
    <check system="">
        <check-content-ref href="" />
    </check>
</Rule>

And the referenced check script:

#!/usr/bin/env bash

if [[ $SELINUX_MODE != "Enforcing" ]]; then
    echo "SELinux is in $SELINUX_MODE mode."
    echo "Using Enforcing mode is highly recommended. See the selinux manual page for how to switch to Enforcing mode."
    exit "$XCCDF_RESULT_FAIL"
fi
exit "$XCCDF_RESULT_PASS"

The first experimental support for SCE is available in openscap-0.8.1 [2]. If you want to give it a try, we recommend installing the scap-workbench-0.6.3 [3] tool and using the content provided by the openscap-content-sectool package. This content was extracted from a program named sectool [4]. Sectool is a simple security audit tool driven by shell scripts!

If you prefer a command-line interface, there is the oscap tool from the openscap-utils package.

# oscap xccdf eval --profile Server --results res.xml --sce-results --report report.html /usr/share/openscap/sectool-sce/sectool-xccdf.xml

Sample report.html is available at:


Script Check Engine allows any script to be used as a check for an XCCDF rule. It provides a lightweight alternative to the official SCAP check engines (OVAL, OCIL). There are various use cases where shell scripts might come in handy:

  • a shell script check can serve as a prototype before a valid OVAL test is written

  • existing checking scripts can be used with XCCDF before their OVAL counterparts are written

  • a shell script can handle cases where the needed OVAL object does not exist

  • the shell script approach can be used in environments where compliance check requirements do not insist on a pure SCAP solution (XCCDF + OVAL)

The advantages of the SCE approach are that content creation is much faster and maintenance is easier. On the other hand, malicious content may cause severe damage - it's important to only ever use trusted content. Content portability depends on the portability of the scripts themselves.

We believe that SCE might be an important step toward broader SCAP adoption, and we hope to see comprehensive security checklists created for various Linux distributions.




Peter Vrabec <>
Martin Preisler <>

SCAP-based security scanner
One of the cool features of the upcoming Fedora 14 release is support for
SCAP (Security Content Automation Protocol). What is SCAP? SCAP is a line of
standards managed by the National Institute of Standards and Technology
(NIST). It provides a standardized approach to maintaining the security of
systems, such as automatically verifying the presence of patches, checking
system security configuration settings, and examining systems for signs of
compromise. [1]

The goal of this post is not a general SCAP introduction. Instead, I would
like to describe the most common and useful use case of OpenSCAP [2] in
Fedora 14: a security configuration scan.

What are we going to need for the scan? We need the oscap command-line tool,
which is part of the openscap-utils package, and input data, called "content"
in SCAP terminology. To be more accurate, you need OVAL and XCCDF content.

OVAL stands for Open Vulnerability and Assessment Language. It
standardizes the three main steps of the assessment process: representing
configuration information of systems for testing; analyzing the system
for the presence of the specified machine state (vulnerability, configuration,
patch state); and reporting the results of this assessment. The other one -
XCCDF - stands for The eXtensible Configuration Checklist Description Format.
It's a language for writing security checklists, benchmarks, and related kinds
of documents.

You can get the most recent XCCDF/OVAL content for Fedora 14 from the OpenSCAP
repository [3], or you can find it in the /usr/share/openscap/ directory
(openscap-utils package). The files you are looking for are
scap-fedora14-xccdf.xml and scap-fedora14-oval.xml.

Before we start the system evaluation, you might want to know what
checks/tests are going to be performed. This information can be found in the
XCCDF content (scap-fedora14-xccdf.xml). Rather than reading the plain XML
file, you can generate a nice HTML document for this purpose.

$ oscap xccdf generate guide --profile F14-Desktop scap-fedora14-xccdf.xml > guide.html

Open guide.html in a web browser and study it. You will soon realize that
there are many, many rules in the provided content, and not all of them apply
to the default Fedora configuration. Therefore we created the F14-Desktop
profile, which consists only of rules relevant to the default Fedora
configuration.

Now we can jump into evaluation.

# oscap xccdf eval --profile F14-Desktop --result-file xccdf-results.xml scap-fedora14-xccdf.xml

This operation might take a while, depending mostly on the size of your local
filesystems, so you have time to examine guide.html more deeply.

Evaluation of each rule can end with one of the following results:
 - Pass - everything is OK
 - Fail - the system configuration is in a different state than expected
 - Unknown - the test can't be automated
 - Error - a problem with the checking engine (are you running oscap as root?)
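If you just want a quick tally of these results without generating the full report, you can grep the result elements out of the XML. This is a rough sketch: the inline sample file below is made up, but it mimics the <rule-result>/<result> elements a real xccdf-results.xml contains.

```shell
# Create a tiny stand-in for xccdf-results.xml (the structure mimics the
# rule-result elements a real evaluation produces; the data is invented).
cat > sample-results.xml <<'EOF'
<TestResult>
  <rule-result idref="rule-1"><result>pass</result></rule-result>
  <rule-result idref="rule-2"><result>fail</result></rule-result>
  <rule-result idref="rule-3"><result>pass</result></rule-result>
  <rule-result idref="rule-4"><result>unknown</result></rule-result>
</TestResult>
EOF

# Tally results per type: extract the <result> values, then count duplicates.
summary=$(grep -o '<result>[^<]*</result>' sample-results.xml \
    | sed 's/<[^>]*>//g' | sort | uniq -c | sed 's/^ *//')
echo "$summary"
```

Against a real results file you would point the grep at xccdf-results.xml instead; a proper XML-aware tool would be more robust, but for a quick look this is usually enough.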

When the evaluation is completed, the results are stored in the
xccdf-results.xml file. It's worth converting it into a more human-readable
representation again.

$ oscap xccdf generate report xccdf-results.xml > report.html

And that's it. I described what we need, where to get it, how to read the
content, how to run an evaluation, and how to interpret the results. I reckon
this is enough for now. If you have any questions, please don't hesitate to
contact us on our mailing list [4].

[4]: open-scap-list(at)

