<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Tech Blog</title>
    <description>I am Andrew Kroh, a software engineer from Northern Virginia.
</description>
    <link>https://www.andrewkroh.com/</link>
    <atom:link href="https://www.andrewkroh.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sun, 01 Feb 2026 21:18:54 +0000</pubDate>
    <lastBuildDate>Sun, 01 Feb 2026 21:18:54 +0000</lastBuildDate>
    <generator>Jekyll v3.8.6</generator>
    
      <item>
        <title>Centralized Logging with journald and Standard Linux Tools</title>
        <description>&lt;p&gt;Nearly every modern Linux distribution ships with journald as its logging system. It
captures logs from system services, the kernel, and applications in a
structured, indexed format. Rather than fighting this or replacing it with
something else, I decided to embrace journald as the foundation of my
centralized logging architecture.&lt;/p&gt;

&lt;p&gt;The goal is simple: collect logs from all my systems into a central location,
store them in a stable format that will be readable for years, and be able to
load them into tools like Elasticsearch whenever I need to analyze them. No
vendor lock-in, no proprietary formats, just plain JSON files on disk.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2025/journald-data-flow.jpeg&quot; alt=&quot;journald-data-flow.jpeg&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The architecture has three parts: getting data into journald, shipping it to a
central server, and storing it durably.&lt;/p&gt;

&lt;h4 id=&quot;getting-data-into-journald&quot;&gt;Getting Data Into journald&lt;/h4&gt;

&lt;p&gt;Most applications already write to journald by default when running as systemd
services. The interesting cases are containers and network devices.&lt;/p&gt;

&lt;h5 id=&quot;docker-and-container-orchestrators&quot;&gt;Docker and Container Orchestrators&lt;/h5&gt;

&lt;p&gt;Docker provides a journald logging driver that sends container logs directly to
the host's journal. Both HashiCorp Nomad and Kubernetes can use this driver. The
key benefit is that container logs gain all the structured metadata that
journald provides, plus you can add your own.&lt;/p&gt;

&lt;p&gt;Here's how I configure a Nomad task to use the journald logging driver:&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;task&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;hello-world&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;driver&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker&quot;&lt;/span&gt;

  &lt;span class=&quot;nx&quot;&gt;config&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;image&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;akroh/hello-world:v3&quot;&lt;/span&gt;

    &lt;span class=&quot;nx&quot;&gt;labels&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
      &lt;span class=&quot;nx&quot;&gt;owner&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;website@example.com&quot;&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

    &lt;span class=&quot;nx&quot;&gt;logging&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
      &lt;span class=&quot;nx&quot;&gt;type&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;journald&quot;&lt;/span&gt;
      &lt;span class=&quot;nx&quot;&gt;config&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;nx&quot;&gt;tag&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;${NOMAD_JOB_NAME}&quot;&lt;/span&gt;
        &lt;span class=&quot;nx&quot;&gt;labels&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;owner&quot;&lt;/span&gt;
        &lt;span class=&quot;nx&quot;&gt;env&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;NOMAD_ALLOC_ID,NOMAD_JOB_NAME,NOMAD_TASK_NAME,NOMAD_GROUP_NAME,NOMAD_NAMESPACE,NOMAD_DC,NOMAD_REGION,NOMAD_ALLOC_INDEX,NOMAD_ALLOC_NAME&quot;&lt;/span&gt;
      &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;tag&lt;/code&gt; option sets the &lt;code class=&quot;highlighter-rouge&quot;&gt;SYSLOG_IDENTIFIER&lt;/code&gt; field, making it easy to filter
logs by job name. The &lt;code class=&quot;highlighter-rouge&quot;&gt;labels&lt;/code&gt; option includes Docker labels as journal fields,
and &lt;code class=&quot;highlighter-rouge&quot;&gt;env&lt;/code&gt; includes the specified environment variables. This metadata becomes
invaluable when you need to correlate logs across services.&lt;/p&gt;

&lt;p&gt;When a container writes a log line, it appears in the journal with all this
context:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;CONTAINER_ID&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;0f07251e631e&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;CONTAINER_NAME&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;server-c10085f7-a59f-4132-3f01-c9050a0303bd&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;CONTAINER_TAG&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;phone-notifier&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;OWNER&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;website@example.com&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;IMAGE_NAME&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;docker.example.com/phone-notifier:v12&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;MESSAGE&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.100.8.98 - - [18/Dec/2025:22:35:06 +0000] &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;POST /v1/notify HTTP/1.1&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; 200 598&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;NOMAD_ALLOC_ID&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;c10085f7-a59f-4132-3f01-c9050a0303bd&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;NOMAD_DC&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;va&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;NOMAD_JOB_NAME&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;phone-notifier&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;NOMAD_TASK_NAME&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;server&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;PRIORITY&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;6&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;SYSLOG_IDENTIFIER&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;phone-notifier&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;_HOSTNAME&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;compute01.va.local.example.com&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;_TRANSPORT&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;journal&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h5 id=&quot;network-devices-via-rsyslog&quot;&gt;Network Devices via rsyslog&lt;/h5&gt;

&lt;p&gt;Network devices like switches, routers, and IP phones typically send logs via
syslog over UDP. You can configure rsyslog to receive these and forward them
into journald, preserving the original message and adding metadata.&lt;/p&gt;

&lt;p&gt;Create &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/rsyslog.d/10-syslog.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;module(load=&quot;omjournal&quot;)
module(load=&quot;imudp&quot;)

input(type=&quot;imudp&quot; port=&quot;514&quot; ruleset=&quot;syslog-to-journald&quot;)

template(name=&quot;journal&quot; type=&quot;list&quot;) {
  constant(value=&quot;udp_514&quot; outname=&quot;TAGS&quot;)
  property(name=&quot;rawmsg&quot; outname=&quot;MESSAGE&quot;)
  property(name=&quot;fromhost-ip&quot; outname=&quot;LOG_SOURCE_IP&quot;)
}

ruleset(name=&quot;syslog-to-journald&quot;){
    action(type=&quot;omjournal&quot; template=&quot;journal&quot;)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This configuration listens on UDP port 514, preserves the raw syslog message
without parsing it, and adds the source IP address as metadata. The &lt;code class=&quot;highlighter-rouge&quot;&gt;TAGS&lt;/code&gt; field
helps identify the ingest path when other rsyslog inputs are also configured.&lt;/p&gt;

&lt;h4 id=&quot;getting-data-out-of-journald&quot;&gt;Getting Data Out of journald&lt;/h4&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;systemd-journal-upload&lt;/code&gt; service is a built-in tool that ships journal
entries to a remote server over HTTP. It runs as a daemon, watches for new
entries, and pushes them as they arrive.&lt;/p&gt;

&lt;p&gt;The service maintains a cursor that tracks the last successfully uploaded entry.
This cursor is persisted to disk, so if the service restarts or loses
connectivity, it resumes from where it left off without duplicating or losing
entries.&lt;/p&gt;
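
&lt;p&gt;The idea behind the cursor can be sketched in a few lines of Python: a small
state file written atomically, so a crash mid-write can never corrupt it. This
is a hypothetical illustration, not systemd's actual implementation:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import json
import os

def save_cursor(path, cursor):
    # Write the new cursor to a temp file, fsync, then atomically
    # rename it over the old state file so a crash mid-write can
    # never leave a truncated cursor behind.
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        json.dump({'cursor': cursor}, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)

def load_cursor(path):
    # Return the saved cursor, or None on first run.
    try:
        with open(path) as f:
            return json.load(f)['cursor']
    except FileNotFoundError:
        return None

save_cursor('upload-state.json', 's=abc123;i=2f')
print(load_cursor('upload-state.json'))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;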

&lt;p&gt;Install &lt;code class=&quot;highlighter-rouge&quot;&gt;systemd-journal-remote&lt;/code&gt; (it provides &lt;code class=&quot;highlighter-rouge&quot;&gt;systemd-journal-upload&lt;/code&gt;) using
your distro's package manager, then configure the upload target.&lt;/p&gt;

&lt;p&gt;Create or edit &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/systemd/journal-upload.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-ini highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;[Upload]&lt;/span&gt;
&lt;span class=&quot;py&quot;&gt;URL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;http://logs.example.com:19532&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Enable and start the service:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;systemctl &lt;span class=&quot;nb&quot;&gt;enable&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--now&lt;/span&gt; systemd-journal-upload.service
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h5 id=&quot;the-http-protocol&quot;&gt;The HTTP Protocol&lt;/h5&gt;

&lt;p&gt;The upload service sends journal entries using a straightforward HTTP protocol:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Endpoint&lt;/strong&gt;: &lt;code class=&quot;highlighter-rouge&quot;&gt;POST /upload&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Content-Type&lt;/strong&gt;: &lt;code class=&quot;highlighter-rouge&quot;&gt;application/vnd.fdo.journal&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Transfer-Encoding&lt;/strong&gt;: chunked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The body contains entries in the
&lt;a href=&quot;https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format&quot;&gt;journal export format&lt;/a&gt;.
Each entry is a series of field-value pairs separated by newlines, with entries
separated by blank lines. Text fields use the format &lt;code class=&quot;highlighter-rouge&quot;&gt;FIELD_NAME=value\n&lt;/code&gt;, while
binary fields use a length-prefixed format.&lt;/p&gt;
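
&lt;p&gt;As an illustration, here is a minimal Python sketch that parses text-mode
entries from the export format. Binary, length-prefixed fields are omitted for
brevity:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def parse_export(text):
    # Parse text-mode entries from the journal export format.
    # Entries are separated by blank lines; each field is one
    # NAME=value line. Binary (length-prefixed) fields are not
    # handled in this sketch.
    entries = []
    for block in text.split('\n\n'):
        fields = {}
        for line in block.splitlines():
            name, sep, value = line.partition('=')
            if sep:
                fields[name] = value
        if fields:
            entries.append(fields)
    return entries

sample = 'MESSAGE=hello\nPRIORITY=6\n\nMESSAGE=world\nPRIORITY=4\n'
print(parse_export(sample))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;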

&lt;p&gt;The chunked transfer encoding allows the client to stream entries continuously.
When there are no more entries to send, the client completes the request and
waits for acknowledgment before updating its cursor.&lt;/p&gt;

&lt;h4 id=&quot;receiving-and-storing-logs&quot;&gt;Receiving and Storing Logs&lt;/h4&gt;

&lt;p&gt;The systemd project provides &lt;code class=&quot;highlighter-rouge&quot;&gt;systemd-journal-remote&lt;/code&gt; as a companion to the
upload service. It receives uploads and writes them into local journal files.
This works well if you want to query logs using &lt;code class=&quot;highlighter-rouge&quot;&gt;journalctl&lt;/code&gt; on the central
server.&lt;/p&gt;

&lt;p&gt;I use a custom receiver that takes a different approach. Instead of storing logs
in the journal export format, it writes them as NDJSON (newline-delimited JSON)
files. The receiver:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Accepts HTTP uploads from journal-upload clients&lt;/li&gt;
  &lt;li&gt;Parses the journal export format and converts entries to JSON&lt;/li&gt;
  &lt;li&gt;Appends to a write-ahead log (WAL) and fsyncs for durability&lt;/li&gt;
  &lt;li&gt;Periodically flushes the WAL to compressed NDJSON files partitioned by date and hour&lt;/li&gt;
&lt;/ol&gt;
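
&lt;p&gt;Step 3 is the durability-critical part. A hypothetical sketch of the WAL
append in Python (the function name and record shape are illustrative):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import json
import os

def wal_append(path, record):
    # Serialize one entry, append it to the WAL, and fsync so the
    # record survives a crash that happens before the periodic
    # flush to compressed NDJSON files.
    line = json.dumps(record).encode() + b'\n'
    with open(path, 'ab') as f:
        f.write(line)
        f.flush()
        os.fsync(f.fileno())

wal_append('events.wal', {'MESSAGE': 'hello', 'PRIORITY': '6'})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;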

&lt;p&gt;The output is zstd-compressed NDJSON files organized by region and time:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;logs/
  region=va/
    source=journald/
      dt=2025-12-18/
        hour=14/
          events-019438a2-7b3c-7def-8123-456789abcdef.ndjson.zst
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Filenames use UUIDv7 identifiers, which embed a timestamp and random component.
This allows multiple receiver instances to write files concurrently without
coordination, enabling horizontal scaling when needed.&lt;/p&gt;
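
&lt;p&gt;Because the most significant bits of a UUIDv7 are a big-endian millisecond
timestamp, the hex filenames also sort roughly chronologically. A minimal,
stdlib-only Python sketch of the idea (not a complete RFC 9562
implementation):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import os
import time

def uuid7_hex():
    # 48-bit big-endian millisecond timestamp, then 80 random bits
    # with the version (7) and RFC 4122 variant bits patched in.
    ms = int(time.time() * 1000)
    ts = ms.to_bytes(6, 'big')
    rand = bytearray(os.urandom(10))
    rand[0] = 0x70 + rand[0] % 16   # version nibble = 7
    rand[2] = 0x80 + rand[2] % 64   # variant bits = 10
    h = (ts + bytes(rand)).hex()
    return '-'.join([h[:8], h[8:12], h[12:16], h[16:20], h[20:]])

print('events-' + uuid7_hex() + '.ndjson.zst')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;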

&lt;p&gt;Each line in these files is a complete JSON object with the original journal
fields plus metadata like the source region and timestamp. This forms what's
sometimes called the &quot;bronze layer&quot; in a data lake architecture.&lt;/p&gt;

&lt;h5 id=&quot;why-ndjson&quot;&gt;Why NDJSON?&lt;/h5&gt;

&lt;p&gt;The archival format matters for long-term retention. NDJSON has several
properties that make it ideal:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Readable with standard tools&lt;/strong&gt;: &lt;code class=&quot;highlighter-rouge&quot;&gt;zstdcat file.ndjson.zst | jq .&lt;/code&gt; works today and will work in 20 years.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;No schema migrations&lt;/strong&gt;: JSON is self-describing. Fields can be added without breaking anything.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Open compression&lt;/strong&gt;: zstd is open-source, well-documented, and widely supported.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Easy to rehydrate&lt;/strong&gt;: Decompress, parse JSON, bulk-load into Elasticsearch or any other tool.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Extensible to cloud storage&lt;/strong&gt;: The same file layout works on local disk, NFS, or S3.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach decouples collection and archival from analysis. You can bulk-load
historical data into Elasticsearch whenever you need it, write custom scripts to
process the JSON directly, or reprocess the archive with updated parsing logic
years later. The data remains accessible regardless of how your tooling evolves.&lt;/p&gt;
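
&lt;p&gt;A rehydration pass can be a short script. The sketch below uses gzip as a
stand-in for zstd so it needs only the Python standard library; for the real
archive, swap in a zstd reader:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import gzip
import json

def read_archive(path, opener=gzip.open):
    # Stream JSON documents from a compressed NDJSON file, one per
    # line, ready to feed to a bulk loader.
    with opener(path, 'rt') as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Write and re-read a tiny archive, as a bulk loader would.
with gzip.open('sample.ndjson.gz', 'wt') as f:
    f.write(json.dumps({'MESSAGE': 'hello', '_HOSTNAME': 'a'}) + '\n')

docs = list(read_archive('sample.ndjson.gz'))
print(docs[0]['MESSAGE'])
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;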

&lt;h4 id=&quot;references&quot;&gt;References&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-upload.service.html&quot;&gt;systemd-journal-upload.service&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-remote.service.html&quot;&gt;systemd-journal-remote.service&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://systemd.io/JOURNAL_EXPORT_FORMATS/&quot;&gt;Journal Export Formats&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.freedesktop.org/software/systemd/man/latest/journal-upload.conf.html&quot;&gt;journal-upload.conf&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://dataengineering.wiki/Concepts/Data+Architecture/Medallion+Architecture&quot;&gt;Medallion Architecture&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Fri, 19 Dec 2025 02:00:00 +0000</pubDate>
        <link>https://www.andrewkroh.com/linux/2025/12/19/centralized-logging-with-journald.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/linux/2025/12/19/centralized-logging-with-journald.html</guid>
        
        <category>logging</category>
        
        <category>journald</category>
        
        <category>systemd</category>
        
        
        <category>linux</category>
        
      </item>
    
      <item>
        <title>Designing wall mounts for the Odroid HC2</title>
        <description>&lt;p&gt;When I purchased Hard Kernel's Odroid HC2, I assumed I
would find a tidy way to mount the device alongside my other computer and
network gear. I considered building a mount or shelf from wood; I also considered
drilling holes in the heatsink to hang the device from a screw-in hook, but
none of these solutions were low profile or sleek. This led me down the path of
designing and printing my first 3D part.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/&quot;&gt;Odroid HC2&lt;/a&gt; is
a small computer attached to a heatsink that accepts a single 3.5&quot; hard disk.
It's a low-cost (~$50), low-power (~10W) way to attach storage to your network.
I am using three of them together to provide redundancy.&lt;/p&gt;

&lt;p&gt;I'd never used a CAD program nor did I have access to a 3D printer when I
decided I wanted to make my own mount. And thanks to the pandemic of 2020, the
&quot;maker space&quot; at the public library was closed, so I couldn't try 3D printing
before investing in my own equipment.&lt;/p&gt;

&lt;p&gt;Before purchasing a printer, I wanted to be sure I could learn to use a CAD
program. I watched a few YouTube tutorials on the subject and then gave design a
try with Autodesk Fusion 360. Below is the haphazardly designed first attempt.
It was enough to convince me that with more time I could produce a better
design. So I ordered a Prusa Mini printer.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-v1-transparent.png&quot; alt=&quot;v1_render&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The printer kit arrived in December after an 8-week lead time. I dusted off the
earlier design and prepared it for printing. I exported the 3D design to an STL
file and ran that through &lt;a href=&quot;https://github.com/prusa3d/PrusaSlicer&quot;&gt;Prusa Slicer&lt;/a&gt;
to create a g-code file (the raw instructions to the printer for creating the
part). I loaded the g-code file to a USB stick and commenced printing with PLA
filament. It was rewarding to have something I designed come to reality in just
25 minutes (even if it's an ugly mess).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-mount-v1.jpg&quot; alt=&quot;v1_printed&quot; /&gt;&lt;/p&gt;

&lt;p&gt;While this first prototype did actually fit the HC2's heatsink, it had several
problems. It wasn't symmetrical, so two different parts would be needed for the
top and bottom. There wasn't enough clearance for screw heads. And the fit
around the HC2 was too tight.&lt;/p&gt;

&lt;p&gt;I designed a second version in about 4 hours (I had to relearn everything I had
forgotten during the 8-week wait), taking into account the lessons from the
first version.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-v2-transparent.png&quot; alt=&quot;v2_render&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-v2-with-heatsink-transparent-orange.png&quot; alt=&quot;v2_heatsink_render&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-mount-v2.jpg&quot; alt=&quot;v2_printed&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The second version fit perfectly, so I loaded ASA filament and began batch
printing the production parts. I chose ASA for its high heat tolerance: the
heatsink routinely reaches 43°C, so PLA would likely have deformed over
time.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/prusa-slicer.png&quot; alt=&quot;prusa_slicer&quot; /&gt;&lt;/p&gt;

&lt;p&gt;And finally after many months of sitting on my workbench, each Odroid HC2 is
wall-mounted using four M3 screws.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-mount-installed.jpg&quot; alt=&quot;wall-mount-closeup&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/uploads/2020/odroid-hc2-mount-installed-array.jpg&quot; alt=&quot;wall-mount-cluster&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This project reaffirmed my opinion that the metric system is far superior to the
imperial system. And I learned that Vernier calipers and a good metal ruler are
really useful tools to have around.&lt;/p&gt;

&lt;p&gt;The project files can be downloaded from
&lt;a href=&quot;https://github.com/andrewkroh/3d-designs&quot;&gt;github.com/andrewkroh/3d-designs&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Fri, 25 Dec 2020 12:00:00 +0000</pubDate>
        <link>https://www.andrewkroh.com/3d-prints/2020/12/25/designing-wallmounts-for-odroid-hc2.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/3d-prints/2020/12/25/designing-wallmounts-for-odroid-hc2.html</guid>
        
        
        <category>3d-prints</category>
        
      </item>
    
      <item>
        <title>Testing GitHub Pull Requests Using git Patches</title>
        <description>&lt;p&gt;Did you know that GitHub can provide a patch file for any pull request (PR)?
Appending &lt;code class=&quot;highlighter-rouge&quot;&gt;.patch&lt;/code&gt; to any pull request URL will get you a patch file for the
PR.&lt;/p&gt;

&lt;p&gt;If you want to locally test the changes provided in the PR then applying the
patch to your local repository can be a fast way to get the changes. You simply
need to download the patch with &lt;code class=&quot;highlighter-rouge&quot;&gt;curl&lt;/code&gt;, then apply it using &lt;code class=&quot;highlighter-rouge&quot;&gt;git am&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl -L -O https://github.com/elastic/go-libaudit/pull/18.patch
git am 18.patch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After running those commands, the PR will be applied to your branch. You can see
the changes by checking the git log.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git log -1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
</description>
        <pubDate>Wed, 17 Jan 2018 12:00:00 +0000</pubDate>
        <link>https://www.andrewkroh.com/development/2018/01/17/testing-github-pull-requests-using-git-patches.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/development/2018/01/17/testing-github-pull-requests-using-git-patches.html</guid>
        
        
        <category>development</category>
        
      </item>
    
      <item>
        <title>Parsing User Agent strings from Packetbeat</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://www.elastic.co/products/beats/packetbeat&quot;&gt;Packetbeat&lt;/a&gt; is an open-source
tool from Elastic (the makers of Elasticsearch) that analyzes network traffic in
real-time and stores the data in Elasticsearch. You can collect some interesting
data if you install Packetbeat in a location where it can see all the traffic
between your network and the Internet. I use a SPAN port on a Cisco switch to
mirror my network's traffic into Packetbeat.&lt;/p&gt;

&lt;p&gt;To get an overview of the various operating systems and browsers being used on a
network you can configure Packetbeat to collect all HTTP traffic including the
&lt;code class=&quot;highlighter-rouge&quot;&gt;User-Agent&lt;/code&gt; request header. Packetbeat collects the raw user agent string which
needs to be parsed and normalized in order to analyze which OSes and browsers
are being used. Parsing of the user agent strings can be performed by
&lt;a href=&quot;https://www.elastic.co/products/logstash&quot;&gt;Logstash&lt;/a&gt; (another product by
Elastic).&lt;/p&gt;

&lt;p&gt;Once you are collecting data you can easily visualize and explore it using
&lt;a href=&quot;https://www.elastic.co/products/kibana&quot;&gt;Kibana&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/uploads/2016/kibana-user-agent-light.png&quot;&gt;&lt;img src=&quot;/assets/uploads/2016/kibana-user-agent-light.png&quot; alt=&quot;Kibana User Agents&quot; style=&quot;width: 75%;&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data flow through my setup is Packetbeat -&amp;gt; Logstash -&amp;gt; Elasticsearch. Below
I will show example configurations that can be used for this task.&lt;/p&gt;

&lt;h5 id=&quot;packetbeat-configuration&quot;&gt;Packetbeat Configuration&lt;/h5&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span class=&quot;err&quot;&gt;interfaces:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;device:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;eth&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;with_vlans:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;protocols:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;http:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;ports:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;80&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;8080&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;8000&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5000&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;8002&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;send_headers:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;User-Agent&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;output:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;logstash:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;hosts:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;localhost:5044&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;logging:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;to_files:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;files:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;path:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;/var/log/packetbeat&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;name:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;packetbeat.log&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;level:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;info&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h5 id=&quot;logstash-configuration&quot;&gt;Logstash Configuration&lt;/h5&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;input {
  beats {
    port =&amp;gt; 5044
  }
}

filter {
  if [type] == &quot;http&quot; {
    useragent {
      # Read the user-agent field from the JSON sent by Packetbeat
      source =&amp;gt; &quot;[http][request_headers][user-agent]&quot;
      # Remove the raw request_headers since we don't need them after reading
      # the user-agent string.
      remove_field =&amp;gt; &quot;[http][request_headers]&quot;
      # Put all of the parsed user-agent data under the &quot;ua&quot; key.
      target =&amp;gt; &quot;ua&quot;
    }
  }
}

output {
  # I am using 'Found' which is Elastic's hosted Elasticsearch offering.
  elasticsearch {
    hosts =&amp;gt; &quot;xyz.us-west-1.aws.found.io:9243&quot;
    ssl =&amp;gt; true
    user =&amp;gt; &quot;readwrite&quot;
    password =&amp;gt; &quot;password&quot;
    manage_template =&amp;gt; false
    index =&amp;gt; &quot;%{[@metadata][beat]}-%{+YYYY.MM.dd}&quot;
    document_type =&amp;gt; &quot;%{[@metadata][type]}&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
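
&lt;p&gt;Before restarting Logstash it is worth sanity-checking the configuration
file. (The exact flag depends on your Logstash version; the 1.5/2.x series
accepts &lt;code class=&quot;highlighter-rouge&quot;&gt;--configtest&lt;/code&gt;. The file name here is just a placeholder.)&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;bin/logstash --configtest -f logstash.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;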

&lt;h5 id=&quot;example-output&quot;&gt;Example Output&lt;/h5&gt;

&lt;p&gt;The document that is indexed in Elasticsearch now includes a &lt;code class=&quot;highlighter-rouge&quot;&gt;ua&lt;/code&gt; field that
holds all of the parsed user-agent data.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;@timestamp&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2016-01-24T20:08:37.193Z&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;beat&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;hostname&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;beats&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;beats&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;bytes_in&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;185&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;bytes_out&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;367&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;client_ip&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.18&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;client_port&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;36801&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;client_proc&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;client_server&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;count&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;direction&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;out&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;http&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;code&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;301&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;content_length&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;148&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;phrase&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Permanently&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;response_headers&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ip&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;23.7.122.8&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;method&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;GET&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;params&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;path&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;port&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;80&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;proc&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;query&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;GET /&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;responsetime&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;45&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;server&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;status&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;OK&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;http&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;@version&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;1&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;host&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;beats&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;tags&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;beats_input_raw_event&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ua&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Chrome&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;os&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Mac OS X 10.6.8&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;os_name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Mac OS X&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;os_major&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;os_minor&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;6&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;device&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Other&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;major&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;12&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;minor&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;0&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;patch&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;742&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
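
&lt;p&gt;With the parsed user-agent fields indexed, you can filter on them directly.
For example, a quick search for requests made from Chrome (the
&lt;code class=&quot;highlighter-rouge&quot;&gt;packetbeat-*&lt;/code&gt; pattern is an assumption based on the daily index name
configured above):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl 'localhost:9200/packetbeat-*/_search?q=ua.name:Chrome&amp;amp;size=5'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;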

</description>
        <pubDate>Sun, 24 Jan 2016 21:10:00 +0000</pubDate>
        <link>https://www.andrewkroh.com/beats/2016/01/24/packetbeat-logstash-user-agent.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/beats/2016/01/24/packetbeat-logstash-user-agent.html</guid>
        
        
        <category>beats</category>
        
      </item>
    
      <item>
        <title>Managing a Firewall with Puppet when using Docker</title>
        <description>&lt;p&gt;The problem with using Docker and managing your firewall with Puppet is that you
have two competing tools trying to manage the rules in the firewall. The
puppetlabs-firewall module allows you to purge all unmanaged firewall chains and
rules, and if configured to do so, Puppet will purge the rules added by Docker.&lt;/p&gt;

&lt;p&gt;This is how you purge &lt;em&gt;all&lt;/em&gt; unmanaged rules:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;n&quot;&gt;resources&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'firewall'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;purge&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;A feature added in puppetlabs-firewall 1.3 provides a method for allowing Puppet
to selectively purge unmanaged firewall rules within a particular chain. A
&lt;code class=&quot;highlighter-rouge&quot;&gt;firewallchain&lt;/code&gt; parameter named
&lt;a href=&quot;https://github.com/puppetlabs/puppetlabs-firewall/blob/01ba4b9c4ac291b51aeca1f1dc487e6607605e7d/lib/puppet/type/firewallchain.rb#L116&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;ignore&lt;/code&gt;&lt;/a&gt;
accepts regular expressions; rules within that chain that match any of the
expressions are ignored when purging.&lt;/p&gt;

&lt;p&gt;When the Docker service is started, it adds iptables rules like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt; FORWARD &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; docker0 &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; conntrack &lt;span class=&quot;nt&quot;&gt;--ctstate&lt;/span&gt; RELATED,ESTABLISHED &lt;span class=&quot;nt&quot;&gt;-j&lt;/span&gt; ACCEPT
&lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt; FORWARD &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; docker0 &lt;span class=&quot;o&quot;&gt;!&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; docker0 &lt;span class=&quot;nt&quot;&gt;-j&lt;/span&gt; ACCEPT
&lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt; FORWARD &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; docker0 &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; docker0 &lt;span class=&quot;nt&quot;&gt;-j&lt;/span&gt; ACCEPT&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;To configure puppetlabs-firewall to ignore the docker rules when purging you can
configure your firewallchain to ignore rules containing &lt;code class=&quot;highlighter-rouge&quot;&gt;&quot;-o docker0&quot;&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;&quot;-i
docker0&quot;&lt;/code&gt;. It is &lt;strong&gt;important&lt;/strong&gt; to note that you cannot purge all firewall
resources as shown above and make use of the &lt;code class=&quot;highlighter-rouge&quot;&gt;ignore&lt;/code&gt; parameter; this
means you lose the ability to purge unmanaged
&lt;code class=&quot;highlighter-rouge&quot;&gt;firewallchain&lt;/code&gt;s, and you must define a &lt;code class=&quot;highlighter-rouge&quot;&gt;firewallchain&lt;/code&gt; for each chain from which
you want Puppet to purge rules.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;n&quot;&gt;firewallchain&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'FORWARD:filter:IPv4'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;ensure&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;present&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;purge&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;ignore&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;'-o docker0'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;'-i docker0'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
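
&lt;p&gt;After a Puppet run, a quick sanity check (not part of the module) confirms
that Docker's rules survived the purge:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;iptables -S FORWARD | grep docker0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;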

&lt;p&gt;For my projects I have created a &lt;a href=&quot;https://github.com/andrewkroh/puppet-base_firewall&quot;&gt;wrapper
module&lt;/a&gt; around
puppetlabs-firewall that creates a firewallchain for INPUT, OUTPUT, and FORWARD
and configures each of them to purge all rules except for those containing
docker0. I am not creating any NAT rules outside of Docker, so I have chosen
not to manage any of the chains within the NAT table using Puppet.&lt;/p&gt;

&lt;h4 id=&quot;related-links&quot;&gt;Related Links&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/andrewkroh/puppet-base_firewall&quot;&gt;andrewkroh/base_firewall Module&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://forge.puppetlabs.com/puppetlabs/firewall&quot;&gt;Puppetlabs Firewall Module&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://tickets.puppetlabs.com/browse/MODULES-1234&quot;&gt;MODULES-1234 - puppetlabs-firewall and Docker&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.docker.com/articles/networking/&quot; title=&quot;Docker Advanced Networking&quot;&gt;Docker Advanced Networking&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Sun, 08 Mar 2015 22:30:01 +0000</pubDate>
        <link>https://www.andrewkroh.com/puppet/2015/03/08/managing-a-firewall-with-puppet-when-using-docker.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/puppet/2015/03/08/managing-a-firewall-with-puppet-when-using-docker.html</guid>
        
        
        <category>puppet</category>
        
      </item>
    
      <item>
        <title>Vulnerability Assessment and Compliance Verification</title>
        <description>&lt;p&gt;OpenSCAP is an open source tool for performing automated vulnerability
assessment and policy compliance verification on Linux. SCAP, pronounced
“ess-cap”, is the Security Content Automation Protocol, which pulls together
open standards such as CVE, CVSS, OVAL, and XCCDF for describing and scoring
vulnerabilities. The
OpenSCAP tool, which is &lt;a href=&quot;https://nvd.nist.gov/scap/validation/128.cfm&quot;&gt;NIST
certified&lt;/a&gt;, ingests the SCAP
content and outputs a report of which checks passed and failed.&lt;/p&gt;

&lt;p&gt;Let's walk through an example of how to audit a RedHat 6 machine against SCAP
content provided by DISA, known as the RedHat 6 STIG Benchmark.&lt;/p&gt;

&lt;p&gt;First, you need to install OpenSCAP and its dependencies (and I'm installing
wget and unzip so that I can download the STIG and unzip it).&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;yum &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;openscap-utils wget unzip
&lt;span class=&quot;o&quot;&gt;================================================================================&lt;/span&gt;
 Package               Arch   Version                Repository            Size
&lt;span class=&quot;o&quot;&gt;================================================================================&lt;/span&gt;
Installing:
 openscap-utils        x86_64 1.0.8-1.el6_5.1        rhel-x86_64-server-6  52 k
 unzip                 x86_64 6.0-1.el6              rhel-x86_64-server-6 149 k
 wget                  x86_64 1.12-5.el6_6.1         rhel-x86_64-server-6 483 k
Installing &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;dependencies:
 bzip2                 x86_64 1.0.5-7.el6_0          rhel-x86_64-server-6  49 k
 elfutils              x86_64 0.158-3.2.el6          rhel-x86_64-server-6 233 k
 elfutils-libs         x86_64 0.158-3.2.el6          rhel-x86_64-server-6 211 k
 fakeroot              x86_64 1.12.2-22.2.el6        rhel-x86_64-server-6  73 k
 fakeroot-libs         x86_64 1.12.2-22.2.el6        rhel-x86_64-server-6  23 k
 file                  x86_64 5.04-21.el6            rhel-x86_64-server-6  47 k
 gdb                   x86_64 7.2-75.el6             rhel-x86_64-server-6 2.3 M
 libxslt               x86_64 1.1.26-2.el6_3.1       rhel-x86_64-server-6 452 k
 man                   x86_64 1.6f-32.el6            rhel-x86_64-server-6 263 k
 openscap              x86_64 1.0.8-1.el6_5.1        rhel-x86_64-server-6 2.9 M
 patch                 x86_64 2.6-6.el6              rhel-x86_64-server-6  91 k
 perl                  x86_64 4:5.10.1-136.el6_6.1   rhel-x86_64-server-6  10 M
 perl-Module-Pluggable x86_64 1:3.90-136.el6_6.1     rhel-x86_64-server-6  40 k
 perl-Pod-Escapes      x86_64 1:1.04-136.el6_6.1     rhel-x86_64-server-6  32 k
 perl-Pod-Simple       x86_64 1:3.13-136.el6_6.1     rhel-x86_64-server-6 212 k
 perl-libs             x86_64 4:5.10.1-136.el6_6.1   rhel-x86_64-server-6 578 k
 perl-version          x86_64 3:0.77-136.el6_6.1     rhel-x86_64-server-6  51 k
 rpm-build             x86_64 4.8.0-38.el6_6         rhel-x86_64-server-6 127 k
 rpmdevtools           noarch 7.5-2.el6              rhel-x86_64-server-6 109 k
 xz                    x86_64 4.999.9-0.5.beta.20091007git.el6
                                                     rhel-x86_64-server-6 137 k
 xz-lzma-compat        x86_64 4.999.9-0.5.beta.20091007git.el6
                                                     rhel-x86_64-server-6  16 k
Updating &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;dependencies:
 elfutils-libelf       x86_64 0.158-3.2.el6          rhel-x86_64-server-6 182 k
 file-libs             x86_64 5.04-21.el6            rhel-x86_64-server-6 313 k
 rpm                   x86_64 4.8.0-38.el6_6         rhel-x86_64-server-6 902 k
 rpm-libs              x86_64 4.8.0-38.el6_6         rhel-x86_64-server-6 313 k
 rpm-python            x86_64 4.8.0-38.el6_6         rhel-x86_64-server-6  57 k
 xz-libs               x86_64 4.999.9-0.5.beta.20091007git.el6
                                                     rhel-x86_64-server-6  89 k

Transaction Summary
&lt;span class=&quot;o&quot;&gt;================================================================================&lt;/span&gt;
Install      24 Package&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;s&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
Upgrade       6 Package&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;s&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

Total download size: 21 M&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now with the tool installed we can audit our server against the DISA STIG.
The benchmark can be downloaded from DISA’s &lt;a href=&quot;https://nvd.nist.gov/scap/validation/128.cfm&quot;&gt;web
site&lt;/a&gt;.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;wget http://iase.disa.mil/stigs/Documents/U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark.zip
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;unzip U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark.zip
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oscap info U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark-xccdf.xml
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oscap xccdf &lt;span class=&quot;nb&quot;&gt;eval&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--report&lt;/span&gt; &lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-redhat_6_v1r6_stig&lt;/span&gt;.html &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpe&lt;/span&gt; U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark-cpe-dictionary.xml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark-xccdf.xml&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
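
&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;oscap info&lt;/code&gt; command lists the profiles contained in the benchmark.
To evaluate a specific profile instead of the default, pass its ID with
&lt;code class=&quot;highlighter-rouge&quot;&gt;--profile&lt;/code&gt; (the ID below is only an illustration; use one reported by
&lt;code class=&quot;highlighter-rouge&quot;&gt;oscap info&lt;/code&gt;):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;oscap xccdf eval \
  --profile MAC-1_Public \
  --report `hostname`-redhat_6_v1r6_stig.html \
  --cpe U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark-cpe-dictionary.xml \
  U_RedHat_6_V1R6_STIG_SCAP_1-1_Benchmark-xccdf.xml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;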

&lt;p&gt;The HTML report that is generated can be viewed in the browser. It summarizes
each rule with a simple pass/fail result. There are details for each rule and
remediation instructions.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/uploads/2015/03/627f72d827bb-redhat_6_v1r6_stig.html&quot;&gt;&lt;img src=&quot;/assets/uploads/2015/03/Screen-Shot-2015-03-08-at-2.06.00-PM.png&quot; alt=&quot;DISA STIG Sample Report&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another useful test for RedHat systems is to verify that all of the required
patches have been installed to address the RedHat security advisories, RHSA (you
can subscribe for announcements
&lt;a href=&quot;http://www.redhat.com/mailman/listinfo/rhsa-announce&quot;&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;wget http://www.redhat.com/security/data/oval/com.redhat.rhsa-all.xml
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oscap oval &lt;span class=&quot;nb&quot;&gt;eval&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--results&lt;/span&gt; rhsa-results-oval.xml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--report&lt;/span&gt; &lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-rhsa-report&lt;/span&gt;.html &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  com.redhat.rhsa-all.xml&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Again, you can view the report in the browser to see which patches have been
applied and which still need to be applied.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/uploads/2015/03/627f72d827bb-rhsa-report1.html&quot;&gt;&lt;img src=&quot;/assets/uploads/2015/03/Screen-Shot-2015-03-08-at-2.28.12-PM.png&quot; alt=&quot;RHSA Sample Report&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Sun, 08 Mar 2015 18:55:21 +0000</pubDate>
        <link>https://www.andrewkroh.com/security/2015/03/08/vulnerability-assessment-and-compliance-verification.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/security/2015/03/08/vulnerability-assessment-and-compliance-verification.html</guid>
        
        
        <category>security</category>
        
      </item>
    
      <item>
        <title>Configuring Cisco ASA SSL Ciphers</title>
        <description>&lt;p&gt;To protect against SSL vulnerabilities it is important to disable SSLv3 and
weak ciphers on your Cisco ASA device.&lt;/p&gt;

&lt;p&gt;To enumerate the ciphers supported by the device I use an openssl wrapper
script called &lt;a href=&quot;https://github.com/jvehent/cipherscan&quot;&gt;cipherscan&lt;/a&gt; that is
available on GitHub. Here are the ciphers available on a default Cisco ASA
setup.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;bash-4.3&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;./cipherscan sslvpn.example.com
.......
Target: sslvpn.example.com:443

prio  ciphersuite         protocols    pfs_keysize
1     RC4-SHA             SSLv3,TLSv1
2     DHE-RSA-AES128-SHA  TLSv1        DH,1024bits
3     DHE-RSA-AES256-SHA  TLSv1        DH,1024bits
4     AES128-SHA          SSLv3,TLSv1
5     AES256-SHA          SSLv3,TLSv1
6     DES-CBC3-SHA        SSLv3,TLSv1

Certificate: UNTRUSTED, 2048 bit, sha256WithRSAEncryption signature
TLS ticket lifetime hint: None
OCSP stapling: not supported
Server side cipher ordering&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;To change the supported protocols and ciphers, log in to the Cisco ASA via SSH.
You can list the current SSL configuration with &lt;code class=&quot;highlighter-rouge&quot;&gt;show ssl&lt;/code&gt; and then make the required changes.&lt;/p&gt;

&lt;p&gt;You should disable SSLv3 due to the POODLE vulnerability. And you should verify
that you are using strong ciphers. I prefer to use ciphers that support PFS, but
the Cisco AnyConnect iOS app for the SSL VPN
&lt;a href=&quot;http://www.cisco.com/c/en/us/td/docs/security/vpn_client/anyconnect/anyconnect30/administration/guide/anyconnectadmin30/acmobiledevices.html#pgfId-1051726&quot;&gt;does not support&lt;/a&gt;
the PFS ciphers, so I had to include aes256-sha1 and aes128-sha1.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;asa5505&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;config&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# ssl client-version tlsv1-only  &lt;/span&gt;
asa5505&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;config&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# ssl server-version tlsv1  &lt;/span&gt;
asa5505&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;config&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# ssl encryption dhe-aes256-sha1 dhe-aes128-sha1 aes256-sha1 aes128-sha1&lt;/span&gt;
asa5505# show ssl  
Accept connections using SSLv2 or greater and negotiate to TLSv1  
Start connections using TLSv1 only and negotiate to TLSv1 only  
Enabled cipher order: dhe-aes256-sha1 dhe-aes128-sha1 aes256-sha1 aes128-sha1  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;And finally verify the changes using cipherscan.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;bash-4.3&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;./cipherscan sslvpn.example.com  
...  
Target: sslvpn.example.com:443

prio  ciphersuite         protocols  pfs_keysize  
1     DHE-RSA-AES256-SHA  TLSv1      DH,1024bits  
2     DHE-RSA-AES128-SHA  TLSv1      DH,1024bits  
3     AES256-SHA          TLSv1  
4     AES128-SHA          TLSv1

Certificate: UNTRUSTED, 2048 bit, sha256WithRSAEncryption signature  
TLS ticket lifetime hint: None  
OCSP stapling: not supported  
Server side cipher ordering  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
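
&lt;p&gt;As one last spot check, you can ask OpenSSL to attempt an SSLv3 handshake
directly. This assumes your OpenSSL build still includes the
&lt;code class=&quot;highlighter-rouge&quot;&gt;-ssl3&lt;/code&gt; option; the handshake should now
fail because the ASA only accepts TLSv1.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;bash-4.3$ openssl s_client -connect sslvpn.example.com:443 -ssl3
# Expect a handshake failure now that SSLv3 has been disabled&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;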

</description>
        <pubDate>Fri, 06 Mar 2015 16:49:28 +0000</pubDate>
        <link>https://www.andrewkroh.com/security/2015/03/06/configuring-cisco-asa-ssl-ciphers.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/security/2015/03/06/configuring-cisco-asa-ssl-ciphers.html</guid>
        
        
        <category>security</category>
        
      </item>
    
      <item>
        <title>Creating a Site-to-Site VPN with Solaris 11</title>
<description>&lt;p&gt;I documented the process I used to create a site-to-site VPN with Solaris 11
acting as the router at each site. I did this because the
&lt;a href=&quot;http://docs.oracle.com/cd/E23824_01/html/821-1453/ipsec-mgtasks-vpn-2.html&quot;&gt;documentation&lt;/a&gt;
provided by Oracle has several critical flaws.&lt;/p&gt;

&lt;p&gt;Here's the direct link to the document that's embedded below:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.google.com/document/d/1bi0vBoUYLVr2-mkANeFq5rjejxvrpGkIxPhAtgg8pys/pub&quot;&gt;Creating a Site-to-Site VPN with Solaris
11&lt;/a&gt;&lt;/p&gt;

&lt;iframe style=&quot;width: 100%; height: 75em;&quot; src=&quot;https://docs.google.com/document/d/1bi0vBoUYLVr2-mkANeFq5rjejxvrpGkIxPhAtgg8pys/pub?embedded=true&quot;&gt;&lt;/iframe&gt;
</description>
        <pubDate>Sun, 01 Feb 2015 17:27:29 +0000</pubDate>
        <link>https://www.andrewkroh.com/solaris/security/2015/02/01/creating-a-site-to-site-vpn-with-solaris-11.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/solaris/security/2015/02/01/creating-a-site-to-site-vpn-with-solaris-11.html</guid>
        
        <category>solaris11</category>
        
        
        <category>solaris</category>
        
        <category>security</category>
        
      </item>
    
      <item>
        <title>Migrating a CVS Repository to Git</title>
        <description>&lt;p&gt;So you want to take those old CVS repositories and migrate them to Git?&lt;/p&gt;

&lt;p&gt;You can use &lt;a href=&quot;http://cvs2svn.tigris.org/cvs2git.html&quot;&gt;cvs2git&lt;/a&gt; to migrate your
CVS repository, with history, to Git. It requires direct filesystem access to the CVS
repository that you wish to convert.&lt;/p&gt;

&lt;p&gt;First, download cvs2git and install it.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  
svn co &lt;span class=&quot;nt&quot;&gt;--username&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;guest &lt;span class=&quot;nt&quot;&gt;--password&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; http://cvs2svn.tigris.org/svn/cvs2svn/trunk cvs2svn-trunk  
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;cvs2svn-trunk  
make &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt;  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now you are ready to convert a repository's history into Git fast-import format.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /tmp/convert/cvs2git-tmp  
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /tmp/convert  
cvs2git &lt;span class=&quot;nt&quot;&gt;--blobfile&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;cvs2git-tmp/git-blob.dat &lt;span class=&quot;nt&quot;&gt;--dumpfile&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;cvs2git-tmp/git-dump.dat &lt;span class=&quot;nt&quot;&gt;--username&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;cvs2git /path/to/cvs/repo/component  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;cvs2git only migrates history, so you need to add all the files to Git yourself.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  
cvs co component  
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;component  
find &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-type&lt;/span&gt; d &lt;span class=&quot;nt&quot;&gt;-name&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'CVS'&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-exec&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{}&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\;&lt;/span&gt;  
git init  
git add &lt;span class=&quot;nt&quot;&gt;--all&lt;/span&gt;  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now import the history and push it all to the origin.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ../cvs2git-tmp/git-blob.dat ../cvs2git-tmp/git-dump.dat | git fast-import  
git remote add origin https://github.com/name/component.git  
git push origin master  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

</description>
        <pubDate>Thu, 26 Dec 2013 17:08:51 +0000</pubDate>
        <link>https://www.andrewkroh.com/uncategorized/2013/12/26/migrating-a-cvs-repository-to-git.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/uncategorized/2013/12/26/migrating-a-cvs-repository-to-git.html</guid>
        
        <category>development</category>
        
        
        <category>uncategorized</category>
        
      </item>
    
      <item>
        <title>DDNS Updater for DNS Made Easy</title>
<description>&lt;p&gt;The problem with hosting a domain on a dynamic IP address is that when the
address changes, your domain becomes inaccessible until you update the DNS
record with the new IP. With DDNS (Dynamic Domain Name Service), DNS records
are updated automatically when the server's IP address changes, so hosting on
a dynamic IP works well if you can afford a few minutes of downtime.&lt;/p&gt;

&lt;p&gt;DNS Made Easy has been handling DNS for my domains since 2004. I have written a
tool in Java for performing secure DDNS updates to their servers when your IP
address changes. It works by making an HTTP request to a server on the public
internet that echoes back the requester's IP address as seen by that server;
this is what allows the tool to work behind NAT. It then compares this address
to the one returned during the previous run, and only if they differ does it
make a secure request to update the DDNS record.&lt;/p&gt;
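
&lt;p&gt;The check-then-update logic can be sketched in shell. This is only an
illustration of the approach, not the actual Java tool; the function and file
names here are made up.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;#!/bin/sh
# update_if_changed NEW_IP STATE_FILE -- prints &quot;updated&quot; or &quot;unchanged&quot;.
# The real tool fetches NEW_IP from a public IP-echo service over HTTP.
update_if_changed() {
  new_ip=$1
  state_file=$2
  old_ip=$(cat &quot;$state_file&quot; 2&gt;/dev/null)
  if [ &quot;$new_ip&quot; != &quot;$old_ip&quot; ]; then
    # Here the real tool makes a secure DDNS update request.
    printf '%s' &quot;$new_ip&quot; &gt; &quot;$state_file&quot;
    echo updated
  else
    echo unchanged
  fi
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;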

&lt;p&gt;This tool was written to be run periodically by a scheduler such as crontab
or Windows Scheduled Tasks.&lt;/p&gt;
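
&lt;p&gt;For example, a crontab entry to run it every five minutes might look like
this (the jar path and log location are illustrative):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# m h dom mon dow  command
*/5 * * * * java -jar /opt/ddns/dns-made-easy-updater.jar &gt;&gt; /var/log/ddns-updater.log 2&gt;&amp;amp;1&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;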

&lt;p&gt;For all the details head on over to GitHub and see
&lt;a href=&quot;https://github.com/andrewkroh/dns-made-easy-updater&quot;&gt;dns-made-easy-updater&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Sat, 09 Jul 2011 16:00:03 +0000</pubDate>
        <link>https://www.andrewkroh.com/uncategorized/2011/07/09/ddns-updater-for-dns-made-easy.html</link>
        <guid isPermaLink="true">https://www.andrewkroh.com/uncategorized/2011/07/09/ddns-updater-for-dns-made-easy.html</guid>
        
        
        <category>uncategorized</category>
        
      </item>
    
  </channel>
</rss>
