<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[std::bad_alloc]]></title><description><![CDATA[Thoughts, and experiments of a computer scientist and engineer.]]></description><link>http://tylerspringer.com/</link><generator>Ghost 0.9</generator><lastBuildDate>Fri, 13 Mar 2026 01:46:20 GMT</lastBuildDate><atom:link href="http://tylerspringer.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Finding Reliable Communication Backbones in a Wireless Sensor Network]]></title><description><![CDATA[<p>Nearing the end of my master's degree I had the pleasure of taking Algorithms Engineering with Dr. David Matula, who has done a significant amount of work in the graph theory space. For my end of semester project I created a program and corresponding visualization that was targeted at finding</p>]]></description><link>http://tylerspringer.com/finding-reliable-communication-backbones-in-a-wireless-sensor-network/</link><guid isPermaLink="false">778e1735-3071-43e2-af75-de2fa2ff9fec</guid><dc:creator><![CDATA[M. Tyler Springer]]></dc:creator><pubDate>Sun, 30 Oct 2016 23:36:52 GMT</pubDate><content:encoded><![CDATA[<p>Nearing the end of my master's degree I had the pleasure of taking Algorithms Engineering with Dr. David Matula, who has done a significant amount of work in the graph theory space. For my end of semester project I created a program and corresponding visualization that was targeted at finding communication backbones in wireless sensor networks.</p>

<p>As it turns out, random geometric graphs (RGGs) are extremely good at modeling wireless sensor networks, because an RGG's edges are determined by proximity: two vertices have an edge between them if they are less than some distance <em>r</em> from each other. In this case <em>r</em> is an analog to the broadcast radius of a wireless sensor. In a real-world wireless sensor network, sensors may be dropped from a plane or scattered on the ground in some unpredictable fashion. It is useful to be able to create a communication backbone between these sensors so that you can extract data from all of them through only one (or very few) egress sensors; it is much nicer to walk to and download information from a single sensor than to gather information individually from sensors scattered throughout some arbitrarily large geographic area. Using "smallest last ordering" and graph coloring algorithms, my program was able to reliably choose good backbones from any random geometric graph given as input. This project was extremely in depth, so I recommend reading the full project description below as well as checking out the visualizations I made to display the work that has been done. </p>

<p>This project has to be one of the coolest that I worked on at university. The task was centered around reliably finding high-quality communication backbones in a wireless sensor network, which is becoming increasingly relevant as IoT devices become cheaper and more ubiquitous by the day. Consider an example where thousands of wireless sensors are thrown out at random across a planet's surface. It would be very costly and time-consuming to travel to each individual sensor and extract its stored information. It would be vastly superior if it were possible to extract information from the entire network at a single egress point. To do this we need to establish communication backbones that identify routes for this information to flow out of the network.</p>

<p>To simulate the behavior of such a network, we used random geometric graphs (RGGs). RGGs have the property that two vertices are connected if they are within some distance <em>R</em> of each other, where <em>R</em> is a configurable distance and a very apt analog to the broadcast range of a wireless sensor. Then, using a series of graph algorithms to order, color, and select nodes, we generate viable backbones. To give you an idea of how this works, we first perform a "smallest last ordering" of the nodes in the graph and then color them greedily based on this ordering. The ordering and subsequent coloring allow us to identify maximal independent sets of nodes with a bias towards high-degree nodes. This means that the colors generated early (color 0 in terms of the Grundy coloring algorithm) will generally contain many nodes with extremely high set coverage. We can then combine any two of these independent sets and pick the pairs with the highest overall set coverage (and potentially the fewest vertices). This is best demonstrated by example; please watch the video below visualizing the results on a unit sphere.</p>
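<p>As a sketch of the ordering-and-coloring step, here is the standard smallest-last / greedy-coloring pair in plain Python on an adjacency dict (an illustration of the technique, not the project's actual implementation):</p>

```python
def smallest_last_order(adj):
    """Repeatedly delete a minimum-degree vertex from a working copy of
    the graph; the smallest-last ordering is the reverse of the deletion
    sequence."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # mutable copy
    removed = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # current minimum degree
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        removed.append(v)
    return list(reversed(removed))

def greedy_color(adj, order):
    """Grundy coloring: each vertex takes the smallest color not used by
    its already-colored neighbors. Every color class is an independent set."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

<p>Each color class produced this way is an independent set, so candidate backbones come from taking the union of two such sets and keeping the pairs with the best coverage.</p>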

<iframe src="https://player.vimeo.com/video/190751203" width="640" height="647" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/190751203">sphereVisualizationDemo</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<p>This project has two major components:</p>

<ul>
<li>The Visualization Component</li>
<li>The Computation Component</li>
</ul>

<p>Visualization was done entirely using the <a href="https://processing.org/">Processing language</a> by reading in data pre-generated by the Computation Component. The visualization component is demoed above, but only results for the sphere are shown. In addition to a sphere, I also tested graphs with points thrown onto a unit square and a unit disk. Here are some sample images of those in action:</p>

<p><img src="http://tylerspringer.com/images/UnitCircleWSN.png" alt="RGG simulating a wireless sensor network on a unit disk"></p>

<p>And here is <em>a</em> communication backbone generated for this graph (it has 99.95% set coverage).</p>

<p><img src="http://tylerspringer.com/images/UnitCircleBackboneWSN.png" alt="Communication backbone for WSN on a unit disk"></p>

<p>And here's a graph on a unit square just because it looks pretty:</p>

<p><img src="http://tylerspringer.com/images/UnitSquareWSN.png" alt="RGG simulating a wireless sensor network on a unit square"></p>

<p>The computation component is responsible for actually creating all this data. It is a Python/C program (using Cython to expedite the process of writing a C extension) that creates a number of CSV files that are then visualized by the Processing sketches shown above. The program is a command-line utility I created to quickly generate all the required data for a simulation. Input parameters include the number of nodes, the type of projection (square, disk, sphere), the evaluation method, the average degree of nodes within the graph, and the location where you'd like the files written.</p>
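<p>To give a feel for that interface, here is a hedged sketch: the flag names are invented for illustration (not the project's actual CLI), but the three projections are sampled the way uniform RGG generators typically do it:</p>

```python
import argparse
import math
import random

def random_point(projection):
    """Sample one point uniformly on the chosen surface."""
    if projection == "square":   # uniform on the unit square
        return (random.random(), random.random())
    if projection == "disk":     # sqrt(u) makes the point uniform in area
        r, t = math.sqrt(random.random()), 2 * math.pi * random.random()
        return (r * math.cos(t), r * math.sin(t))
    if projection == "sphere":   # Archimedes: uniform z gives a uniform sphere
        z = random.uniform(-1.0, 1.0)
        t = 2 * math.pi * random.random()
        s = math.sqrt(1.0 - z * z)
        return (s * math.cos(t), s * math.sin(t), z)
    raise ValueError(f"unknown projection: {projection}")

parser = argparse.ArgumentParser(description="RGG data generator (sketch)")
parser.add_argument("--nodes", type=int, required=True)
parser.add_argument("--projection", choices=["square", "disk", "sphere"],
                    default="square")
parser.add_argument("--avg-degree", type=float, default=50.0)
parser.add_argument("--out-dir", default=".")

args = parser.parse_args(["--nodes", "16000", "--projection", "sphere"])
points = [random_point(args.projection) for _ in range(args.nodes)]
```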

<p>One other interesting feature of this program is that it includes a number of optimizations to reduce computational complexity (this really helps when graphs become large). For example, when generating a graph we must compare every pair of nodes to determine whether they are within distance <em>R</em> of each other and thereby share an edge, which given <em>n</em> nodes takes <em>n^2</em> comparisons, or <em>O(n^2)</em>. That's not a great running-time class to be in, so it makes a lot more sense to optimize this process. Since nodes more than a distance <em>R</em> apart cannot be connected, there is no reason to compare nodes on opposite sides of the graph, as it's physically impossible that they share an edge. Instead we divide the graph into a grid where each cell is <em>R</em>x<em>R</em> in size, and compare each node only against nodes in its own cell and the adjacent cells, eliminating silly comparisons that can never result in an edge between vertices. This process runs in expected time <em>O(n^2 R^2)</em>, where <em>R</em> is less than 1. This is only one example of the optimizations that make this a tractable problem, one that can actually be implemented in the real world.</p>

<p>The document embedded below is the actual report I submitted for my project and is chock full of more details about this whole process. Please take a moment to read through it - you might learn something interesting!</p>

<hr>

<iframe src="http://tylerspringer.com/ViewerJS/WSNProject_Springer_Complete.pdf" width="100%" height="600" allowfullscreen webkitallowfullscreen=""></iframe>]]></content:encoded></item><item><title><![CDATA[Infinity Table v1.0 Project Decription and Gallery]]></title><description><![CDATA[<p>An infinity mirror is created by sandwiching some lights in between the reflective side of a one-way-mirror and the reflective side of a traditional mirror. The two mirrored surfaces reflect the light back and forth between one another creating the appearance of infinite depth. By using a one-way-mirror, some light</p>]]></description><link>http://tylerspringer.com/infinity-table-v1-0-gallery/</link><guid isPermaLink="false">4b003a2b-10ac-4501-903e-48dffaacb8ad</guid><dc:creator><![CDATA[M. Tyler Springer]]></dc:creator><pubDate>Mon, 08 Aug 2016 03:41:24 GMT</pubDate><content:encoded><![CDATA[<p>An infinity mirror is created by sandwiching some lights in between the reflective side of a one-way-mirror and the reflective side of a traditional mirror. The two mirrored surfaces reflect the light back and forth between one another creating the appearance of infinite depth. By using a one-way-mirror, some light is allowed to escape enabling a spectator to view the effect. Because some of the light is allowed to escape and some is also absorbed by the non-reflective surfaces between the sandwich, the effect does not persist into infinity. For this reason it is extremely important to use a very high quality one-way-mirror (as opposed to say, car tint on a piece of clear glass) as it will make the illusion depth appear to be much greater. It is also important to use the brightest LEDs you can find, because the one-way-mirror substantially cuts down on the amount of light allowed to escape, dimming the brightness of LEDs as if you were wearing sunglasses.</p>

<p>This project was a joint effort between my Dad and myself (and a few errant friends toward the end), as there was a lot that had to be done. The wood is actually from an old bookshelf that my Mom and Dad bought for my apartment during school. It was $10 and made from solid oak, so the lumber alone made that shelf a good investment. The shelf was stained and had to be completely resurfaced before we could cut it down and stick it all back together as a coffee table. Everything uses pocket screws so that there are no exposed screw heads, and the table has six layers of polyurethane coating the surface (four coats of regular fast-dry poly and two of spar urethane to protect the wood from UV and moisture).</p>

<p>The LEDs are APA102 based and were purchased from Pololu electronics. These particular LEDs are the highest-density APA102s on the market that I am aware of, at 144 LEDs per meter. However, they are only sold in half-meter strips due to voltage drop across the strip (yes, they are super, super bright), which means you have to chain a bunch of strips together and power each one individually to keep the color from shifting increasingly red along the strip. At the moment the LEDs are driven by an Arduino UNO via standard SPI, though I plan to swap that out for a Raspberry Pi any day now. The Pi will let me do really cool things like slice up images and use them as a bit stream of colors to the LEDs (if that makes no sense, watch this video to get an idea of what I mean).</p>
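<p>For the curious, the APA102 wire format is simple enough to sketch: a 32-bit zero start frame, then one four-byte frame per LED (a 0b111 marker plus a 5-bit global brightness, followed by blue, green, red), then extra all-ones bytes to clock the data through to the end of the strip. Here is a Python sketch of that framing (the general APA102 protocol, not the table's actual Arduino sketch):</p>

```python
def apa102_frame(pixels, brightness=31):
    """Build one refresh of an APA102 strip as raw SPI bytes.
    pixels: list of (r, g, b) tuples; brightness: 5-bit global level (0-31)."""
    frame = [0x00] * 4                        # start frame: 32 zero bits
    for r, g, b in pixels:
        # LED frame: 0b111 marker + 5-bit brightness, then blue, green, red
        frame.append(0b11100000 | (brightness & 0x1F))
        frame += [b, g, r]
    # end frame: extra clock edges so the last LEDs latch; 4 bytes covers
    # short strips, longer strips need roughly one extra bit per two LEDs
    frame += [0xFF] * max(4, (len(pixels) + 15) // 16)
    return bytes(frame)
```

<p>On a Raspberry Pi these bytes could be pushed out with a generic SPI interface such as spidev; the Arduino does the same thing, just clocking the bytes out from C.</p>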

<p><strong>Here is a quick view of the finished product:</strong>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/infinity_table_complete.jpg" alt="Finished Product"></p>

<p><strong>Here are some videos of the table in action:</strong></p>

<iframe src="https://player.vimeo.com/video/177969723?color=ffffff" width="640" height="1138" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/177969723">Infinity Table Running In the Living Room</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<iframe src="https://player.vimeo.com/video/177969805?color=ffffff" width="640" height="1138" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/177969805">Infinity Table Running Before Urethane</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<iframe src="https://player.vimeo.com/video/177970003?color=ffffff" width="640" height="1138" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/177970003">First look at the LEDs and Mirror</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<iframe src="https://player.vimeo.com/video/177970045?color=ffffff" width="640" height="1138" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/177970045">Quick test fit of the LEDs and the mirrors</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<iframe src="https://player.vimeo.com/video/177970060?color=ffffff" width="640" height="1138" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>  

<p><a href="https://vimeo.com/177970060">Playing with the spacing on the LEDs</a> from <a href="https://vimeo.com/user55235050">Tyler Springer</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

<p><strong>Annnnnd here is the table being built over the course of probably about 6 months, as we had time:</strong></p>

<p><img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3865.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3867.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3869.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3870.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3871.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3873.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3875.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3910.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3911.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3912.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3914.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3915.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3916.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3917.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3919.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3920.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3922.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3925.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_3926.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4081.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4114.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4116.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4120.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4121.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4128.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4129.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4130.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4131.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4132.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4133.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4134.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4135.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4136.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4137.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4138.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4139.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4140.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4141.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4142.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4143.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4144.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4145.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4146.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4147.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4148.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4149.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4150.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4841.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4842.jpg" alt="Infinity Table in Production"> <br>
<img src="http://tylerspringer.com/images/N000000-0.png" data-src="/images/IMG_4843.jpg" alt="Infinity Table in Production"></p>]]></content:encoded></item><item><title><![CDATA[Destroying Sensitive Information Stored in AWS with GNU Shred and Python]]></title><description><![CDATA[<h3 id="volumesecurityinaws">Volume Security in AWS</h3>

<p>AWS is a tremendous resource that makes standing up and shutting down complex infrastructure easy. At work and at home I use AWS on an almost daily basis. I like that I can provision and terminate resources without giving a second thought to the location or</p>]]></description><link>http://tylerspringer.com/destroying-sensitive-information-stored-in-aws/</link><guid isPermaLink="false">50e47606-c1df-45b0-8be9-5e9865b3d96d</guid><dc:creator><![CDATA[M. Tyler Springer]]></dc:creator><pubDate>Mon, 01 Aug 2016 18:27:53 GMT</pubDate><content:encoded><![CDATA[<h3 id="volumesecurityinaws">Volume Security in AWS</h3>

<p>AWS is a tremendous resource that makes standing up and shutting down complex infrastructure easy. At work and at home I use AWS on an almost daily basis. I like that I can provision and terminate resources without giving a second thought to the location or content of the physical machines, and that I can do so inside of 60 seconds for just about any action. </p>

<p>In the past I've never had to consider what happens to something like an EBS volume when it's returned to the Amazon resource pool, because frankly, who cares? It doesn't bother me that someone at Amazon might be able to see the contents of my volume, because at the end of the day all they are going to find is some code. But what happens if our volumes <em>do</em> in fact contain confidential information or, even worse, confidential information that is worth something to someone like a hacker?</p>

<p>In those instances it is reasonable to protect the contents of the volumes by encrypting them (a feature that Amazon offers free of charge). In that case you can simply destroy the keys encrypting the drive and rest easy that no one, at Amazon or anywhere else, is going to be able to snoop on the contents of the volume once it's returned to the resource pool.</p>

<p>However, in the real world, things have a tendency to be a little more screwy. Consider that someone other than yourself could have spun up an environment at your company that was never intended to contain any kind of sensitive/confidential information. In this case it is completely reasonable to have set up unencrypted volumes, if for no reason other than having one less key to manage. Now let's say that at some point someone places customer data on those volumes for demo purposes. This shouldn't have happened, but let's say it did anyway. Some questions you might then ask are:</p>

<ul>
<li>What happens to the data when it is released to Amazon? </li>
<li>Is your data secure? </li>
<li>Do you have to worry that your data might be persisted and another customer down the line might have access to it?</li>
</ul>

<p>As it turns out the answers to these questions can be a little tough to come by. After quite a bit of searching, I found <a href="https://forums.aws.amazon.com/thread.jspa?threadID=101237">this AWS developer forum thread</a> detailing different compliance ratings that Amazon holds and how it deals with your data during its tenancy in AWS. Most of the information in this thread seems to indicate that data is wiped according to DoD standards and that you have absolutely nothing to worry about when releasing volumes containing sensitive information. <strong>However, this is misleading and is not necessarily the case.</strong> If you read the very final post in the thread you will see that page 21 of the AWS security whitepaper reads:</p>

<blockquote>
  <p>"Amazon EBS volumes are presented to you as raw unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse so that you can be assured that the wipe process completed. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume for compliance with your established requirements."</p>
</blockquote>

<p><strong>This indicates that wiping data on EBS volumes to DoD or NIST standards is in fact the customer's responsibility and is not inherently performed by AWS.</strong> This statement is also mirrored in AWS' <a href="https://aws.amazon.com/compliance/shared-responsibility-model/">shared responsibility model documentation</a>, which indicates that customers are responsible for security <em>in</em> the cloud while AWS is responsible for security <em>of</em> the cloud. So, if you don't ensure drives are properly wiped before releasing them back into the AWS resource pool, the data <em>is</em> still hanging around within the cloud and is not deleted until the space is needed for another customer. This leaves your sensitive data susceptible to exfiltration by Amazon contractors or employees (and although I'm sure they're all honest, you never can be too sure).</p>

<p>So in conclusion, we <strong>must</strong> securely wipe all data from EBS volumes <strong>before</strong> releasing them back to Amazon if we want to ensure that our potentially sensitive data doesn't fall into the wrong hands.</p>

<h3 id="introducingthegnushredutility">Introducing The GNU Shred Utility</h3>

<p>The GNU <code>shred</code> utility makes it possible to permanently erase all data on a persistent storage device like a hard drive or solid state drive. It does this by overwriting the contents of the volume with random data in several passes and then zeroing everything out in a final pass, producing a raw, unformatted volume free of any sensitive/confidential information. <code>shred</code> is part of <code>coreutils</code>, so finding quality documentation should be as easy as <code>man shred</code>. There are also a number of good <a href="http://fsckin.com/2008/01/09/using-shred-to-wipe-hard-drives-dod-uses-it-you-should-too/">blog posts</a> about how to use <code>shred</code> and how it works under the hood if you are inclined to dig deeper. Let's see how we can use <code>shred</code> to wipe an EBS volume so that we can safely release it back to the AWS resource pool.</p>

<h3 id="enterpythonboto">Enter python + boto</h3>

<p>In order to use <code>shred</code> we have to take care of a few things first. Foremost, it is not possible to reliably destroy an EBS volume while it is attached to a running EC2 instance. So when we stop/terminate the EC2 instance, we must ensure that the EBS volume attached to it is <em>not</em> deleted by default (otherwise the volume is released to AWS before it has been shredded). Once the EC2 instance is stopped/terminated, the EBS volume can be detached and is ready to be shredded.</p>

<p>Because we cannot shred a drive that we are booting from, we will need an auxiliary EC2 instance to actually <em>perform</em> the shred. <strong>The new EC2 instance must also be set up within the same availability zone as the EBS volume to be shredded</strong> (EBS volumes can only be attached to instances in the same availability zone).</p>

<p>Once we have configured and launched a new EC2 instance, we are ready to start shredding some drives. To establish a basic understanding of the process, let's look at how to perform this task manually; then we will automate the process using Python and boto, which is much more convenient for shredding multiple EBS volumes. To get started we will need to:</p>

<ul>
<li>Attach the EBS volume we want to erase to the new EC2 instance we have setup for shredding</li>
<li>Locate that attached volume in <code>/dev/</code>, but do not mount it</li>
<li>Run <code>shred</code> on the attached EBS volume</li>
<li>Detach the drive from EC2 instance</li>
<li>Release the EBS volume back to Amazon's resource pool</li>
</ul>
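<p>Those five steps can eventually be scripted end to end. Below is a hedged sketch using boto3 (the current AWS SDK for Python) - every ID, the hostname, and the key path are placeholders, and the ssh call assumes key-based access to the shredder instance:</p>

```python
def xvd_name(console_device):
    """Map the console's device name (/dev/sdx) to the name the instance
    kernel actually exposes (/dev/xvdx)."""
    return console_device.replace("/dev/sd", "/dev/xvd", 1)

def shred_args(device, passes=3):
    """Build the shred invocation: `passes` random passes, then -z zeros."""
    return ["sudo", "shred", "-f", "-v", "-z", "-n", str(passes), device]

def shred_and_release(volume_id, instance_id, host, key_path,
                      device="/dev/sdx", passes=3):
    import subprocess
    import boto3  # AWS SDK for Python; assumed installed and configured

    ec2 = boto3.client("ec2")

    # Attach the detached volume to the shredder instance and wait for it.
    ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id,
                      Device=device)
    ec2.get_waiter("volume_in_use").wait(VolumeIds=[volume_id])

    # Run shred over ssh against the raw, unmounted block device.
    subprocess.check_call(["ssh", "-i", key_path, "ec2-user@" + host]
                          + shred_args(xvd_name(device), passes))

    # Detach, wait, and release the volume back to the resource pool.
    ec2.detach_volume(VolumeId=volume_id)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
    ec2.delete_volume(VolumeId=volume_id)
```

<p>Looping that function over a list of volume ids handles shredding many EBS volumes in one go; first, though, let's walk through the process by hand.</p>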

<p>So, let's go ahead and attach the EBS volume we want to shred to the new EC2 instance we've set up. First, go to the AWS management console and copy down the "instance id" field (it should have the form "i-abcd1234"). Next, select "Volumes" from the "Elastic Block Store" section of the navigation menu on the left side of the window and locate the volume you intend to shred. Right click the volume and select "Attach volume." A dialog box will appear, prompting you for the instance id of the machine you want to attach the drive to - enter the instance id you copied down earlier. The prompt will also ask you for a "device." This field is the name you want the drive to have when it appears in <code>/dev/</code>. You can use whatever you want, but I normally use <code>/dev/sdx</code>. This particular device name will show up as <code>/dev/xvdx</code> on the actual EC2 instance (the paravirtual disk driver on the instance renames <code>sd*</code> devices to <code>xvd*</code>). Now that the drive is attached, ssh into the newly created EC2 instance, run <code>ls -l /dev/</code>, and confirm that <code>xvdx</code> (or whatever you named your device) is present in the list. To make your life a bit easier you can run <code>ls -l /dev/ | grep xvdx</code>, which searches specifically for <code>xvdx</code> instead of making you look through the whole list.</p>

<p>Once you have confirmed that the device is attached, we can go ahead and start shredding. <em>You do not need to mount the drive to shred it.</em> To shred the drive to DoD standards simply run the command:</p>

<p><code>sudo shred /dev/xvdx -f -v -z</code></p>

<p>This will shred the device <code>/dev/xvdx</code> with three passes (the default) of random data plus one final pass of zeros to finish it out. Three is the default number of passes <code>shred</code> performs, but you can specify any number using the <code>-n</code> flag; <code>sudo shred /dev/xvdx -n 7</code> would write 7 passes of random data to the drive.</p>]]></content:encoded></item></channel></rss>