Jekyll2022-11-08T14:08:11+00:00/feed.xmlJason’s BlogJust a programmer writing about code things.Building Ben Eater’s 6502 Kit2021-07-15T00:24:00+00:002021-07-15T00:24:00+00:00/2021/07/15/ben-eater-6502-kit-build<p>I recently discovered Ben Eater’s excellent <a href="https://eater.net/6502">video
series</a> on building a 6502-based computer on a
breadboard. The <a href="https://en.wikipedia.org/wiki/MOS_Technology_6502">6502</a> is a
pretty famous microprocessor and variants of it were used in the Nintendo, some
Ataris, the Apple II, and the Commodore 64 (among others).</p>
<p>I’ve done a bit of hardware stuff before, but mostly at the level of Raspberry
Pis, so this was definitely a step up in terms of difficulty for me. Undaunted,
I started with the Clock Module:</p>
<p><img src="/assets/clock-module-1.jpg" alt="starting the clock module" /></p>
<p>I was able to get it working without too much trouble and definitely learned
how to neatly wire up a breadboard (also the value of doing it neatly). Here’s
the final version:</p>
<p><img src="/assets/clock-module-finished.jpg" alt="finished clock module" /></p>
<p>It’s amazing how satisfying just getting an LED to blink at different speeds
is! And, yes, that is me using a caliper to measure out the wires. The star of
this circuit is definitely the 555 timer, which I took a detour to study in an
attempt to understand it better. I ordered a <a href="https://shop.evilmadscientist.com/652">kit from
EvilMadScientist.com</a> that I’m excited
about but I haven’t assembled it yet! Now it’s time to get the 6502 installed!
You can see it here installed on a second breadboard attached to the clock
module:</p>
<p><img src="/assets/6502-1.jpg" alt="installing the 6502" /></p>
<p>One really cool thing that Ben did in his video series was to use an Arduino
to read the contents of the address and data buses so that you’re able to see,
in real time, what’s on the bus! Here’s what that looks like all hooked up:</p>
<p><img src="/assets/6502-2.jpg" alt="instrumenting the 6502 with an Arduino" /></p>
<p>After verifying that things are working at this level, the next step is to hook
up an <a href="https://en.wikipedia.org/wiki/EEPROM">EEPROM</a> to store a program for the
6502 to run. There’s also an <a href="https://en.wikipedia.org/wiki/WDC_65C22">interface
chip</a> that I installed before taking
another photo. You can see the output of that chip on the fancy LEDs above the
LCD screen. At this point, I have a rudimentary program running to communicate
with the LCD to display “Hello, world!”</p>
<p><img src="/assets/6502-3.jpg" alt="Hello World on the 6502" /></p>
<p>Great success! Now, we’ll install a RAM chip so that we can have a stack, which
will enable us to call subroutines. This required moving some chips around to
keep things clean, so I had to do a good bit of re-wiring. Here are the boards before
that work started:</p>
<p><img src="/assets/6502-4.jpg" alt="Two steps forward, one step back" /></p>
<p>I also had the bright idea to label the pins on the ICs so that I wouldn’t have
to constantly refer back to their datasheets. That was a huge improvement in both
efficiency and accuracy. Here’s the state of the board with all the labels and
half of the final wiring done:</p>
<p><img src="/assets/6502-5.jpg" alt="Labels done, progress on the wiring" /></p>
<p>After getting all of the wiring done, I had it working again, but now with a stack!
The final step is to remove the clock module, which runs pretty slowly, and
install a real crystal oscillator running at a blistering 1MHz. Unfortunately,
I ran into issues running at full speed where I’d only see the last two or three
characters of the string on the LCD. After a good deal of debugging, I
discovered that the voltage on the output pins between the LEDs and the LCD was
really low, so I removed the LEDs and everything worked again! My hypothesis is
that the resistors were restricting the voltage too much. With the final problem
solved, we have a working 6502-based computer running at full speed!</p>
<p><img src="/assets/6502-finished.jpg" alt="final build" /></p>
<p>Next, I’m going to give <a href="https://eater.net/8bit/">Ben’s 8-bit computer</a> build a try.</p>Restoring a Sun Microsystems Ultra 52021-06-18T21:22:00+00:002021-06-18T21:22:00+00:00/2021/06/18/ultra-5-restoration<p>My first “computer job” was working as the assistant sysadmin for the Math
Department at the University of South Carolina. I can still remember how proud
I was to have my own office and a cutting-edge Sun Microsystems Ultra 5
workstation with the matching, purple, 19” CRT. It had a 360MHz UltraSPARC-IIi
processor and ran Solaris 9 with CDE as the windowing system. I learned vim,
Perl, and lots more about UNIX on that computer.</p>
<p>Because I have so many good feelings attached to that hardware, I wanted to
have one of my own again. So, after searching eBay for a couple of weeks and
vacillating on whether or not I should actually purchase it, I decided on one
that even came with the keyboard I used.</p>
<p><img src="/assets/sun_ultra_5_booted.jpg" alt="The box, along with some foreshadowing" /></p>
<p>Thanks to some speedy shipping, I received it about a week later and in great
condition. I was immediately reminded just how heavy and well-constructed these
machines are! Typing on the keyboard definitely brought back some memories as
well. When I booted it up, though, I was presented with a nice error message:</p>
<p><img src="/assets/sun_boot_failure.jpg" alt="The IDPROM contents are invalid" /></p>
<p>This was expected, though. Unfortunately, the battery is <em>inside</em> of the NVRAM
chip and not replaceable. Some more adventurous folks have used a Dremel to get
access to the battery in order to solder leads on it to power the chip.
Fortunately, <a href="http://cholla.mmto.org/computers/sun/ultra/nvram.html">this blog post</a> describes how to manually set the MAC address
so that it’ll boot. For my machine, this is what that looks like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>set-defaults
1 0 mkp
80 1 mkp
8 2 mkp
0 3 mkp
20 4 mkp
b9 5 mkp
4e 6 mkp
5a 7 mkp
0 8 mkp
0 9 mkp
0 a mkp
0 b mkp
b9 c mkp
4e d mkp
5a e mkp
0 f 0 do i idprom@ xor loop f mkp
</code></pre></div></div>
<p>With that typing out of the way, I was able to run the <code class="language-plaintext highlighter-rouge">banner</code> command, see
that the MAC address is set correctly, and run <code class="language-plaintext highlighter-rouge">reset</code> to reboot. At this
point, I was able to get to the login screen. Huzzah!</p>
<p>Unfortunately, I wasn’t given any of the passwords for the box (the seller also
didn’t know them), so I was off to searching for a way to reset the root
password. I tried resetting it via OpenBoot, with no luck. I found a way to
reset it by booting to the Solaris 9 install CD but I don’t have one of those
so that was out. However, I had another idea – take out the hard drive, mount
the filesystem, and blank out the root password.</p>
<p><img src="/assets/sun_ultra_5_inside.jpg" alt="A look at the inside sans hard drive" /></p>
<p>The first step was to order an <a href="https://smile.amazon.com/gp/product/B014PEP3DU">IDE to USB
adapter</a>. Once that arrived, I
excitedly but carefully removed the hard drive from the Sun box and attached it
to my iMac. Unfortunately, the Mac didn’t recognize the drive at all. Next, I
tried attaching it to my NUC running NixOS and it could read it! Here’s what
<code class="language-plaintext highlighter-rouge">fdisk -l</code> returned:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Disk /dev/sdb: 8.03 GiB, 8622931968 bytes, 16841664 sectors
Disk model:
Geometry: 16 heads, 63 sectors/track, 16706 cylinders
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: sun
Device Start End Sectors Size Id Type Flags
/dev/sdb1 1049328 1336607 287280 140.3M 2 SunOS root
/dev/sdb2 1336608 1933343 596736 291.4M 4 SunOS usr
/dev/sdb3 0 16839647 16839648 8G 5 Whole disk
/dev/sdb4 1933344 2060351 127008 62M 7 SunOS var
/dev/sdb5 0 1049327 1049328 512.4M 3 SunOS swap u
/dev/sdb6 2060352 2112767 52416 25.6M 0 Unassigned
/dev/sdb7 2112768 4775903 2663136 1.3G 4 SunOS usr
/dev/sdb8 4775904 16839647 12063744 5.8G 8 SunOS home
</code></pre></div></div>
<p>Now we’re making progress! However, when I attempted to mount the root
partition, I got an error because there isn’t a <code class="language-plaintext highlighter-rouge">/dev/sdb1</code>. Womp, womp.
Apparently this happens when the kernel can’t recognize the partition table,
but I was able to manually add the partition with <code class="language-plaintext highlighter-rouge">addpart /dev/sdb 1 1049328 287280</code>
and then mount the volume (read-only) with <code class="language-plaintext highlighter-rouge">mount /dev/sdb1 /mnt/sun</code>. From
there, I was able to grab the contents of the <code class="language-plaintext highlighter-rouge">/etc/shadow</code> file that contains
the root password. Unfortunately, I still had two problems:</p>
<ol>
<li>the filesystem is mounted read-only, so I can’t just wipe out the root password</li>
<li>the password is encrypted</li>
</ol>
<p>Considering the fact that I don’t have a way to repair or re-install the
operating system, I was very hesitant to try to mount the filesystem as
read/write. Instead, I decided to give the password cracker, <a href="https://www.openwall.com/john/">John the Ripper</a>
a try. To my surprise, it worked fantastically well, cracking all of the
passwords in the shadow file! Here’s what that looked like, with redactions:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>jasondew@nixos ~ $ john shadow
Created directory: /home/jasondew/.john
Using default input encoding: UTF-8
Loaded 3 password hashes with 3 different salts (descrypt, traditional crypt(3) [DES 128/128 SSE2])
Will run 8 OpenMP threads
Proceeding with single, rules:Single
Press 'q' or Ctrl-C to abort, almost any other key for status
Almost done: Processing the remaining buffered candidate passwords, if any.
Warning: Only 646 candidates buffered for the current salt, minimum 1024 needed for performance.
Warning: Only 595 candidates buffered for the current salt, minimum 1024 needed for performance.
Warning: Only 780 candidates buffered for the current salt, minimum 1024 needed for performance.
Proceeding with wordlist:/nix/store/vkic74bzbriwwlzbx8bg85cg1zbpi3py-john-1.9.0-jumbo-1/share/john/password.lst, rules:Wordlist
Proceeding with incremental:ASCII
Warning: MaxLen = 13 is too large for the current hash type, reduced to 8
kenn98 (<username 1>)
kenn98 (root)
tinandy1 (<username 2>)
3g 0:00:01:12 DONE 3/3 (2021-06-10 18:25) 0.04139g/s 13771Kp/s 14036Kc/s 14036KC/s tigieaug..timsgug2
Use the "--show" option to display all of the cracked passwords reliably
Session completed
</code></pre></div></div>
<p>At this point, I was able to log in (the root password was <code class="language-plaintext highlighter-rouge">kenn98</code>) and
immerse myself in nostalgia. I struggled a bit because I don’t have a mouse,
but I was able to find some shortcut keys to let me use the window manager. The
terminal emulator brought back the most memories. I was surprised to see that
Netscape Navigator was even installed!</p>
<p>This was my first foray into retro computing and I’ve really enjoyed it. I’m
not sure what’s next for this machine, but I think I’d like to get a working C
or Rust compiler installed. I picked up a small touchscreen monitor from the
local Goodwill and I’d like to see if I can decipher the data coming in off
its serial port.</p>TIL about the cpuid instruction2021-03-11T01:37:47+00:002021-03-11T01:37:47+00:00/2021/03/11/til-about-the-cpuid-instruction<p>So I’ve been trying to learn Rust more deeply lately, and one thing I found
that I really enjoyed was Philipp Oppermann’s blog series, <a href="https://os.phil-opp.com/">“Writing an OS in Rust”</a>.
I enjoyed it so much that I sponsored his work on GitHub and so I get the monthly
newsletter, <a href="https://rust-osdev.com/this-month/2021-02/">This Month in Rust OSDev</a>. This issue linked to a repo where someone
is writing a <a href="https://gitlab.com/cdrzewiecki/celos">hobby OS</a> based off of the blog series. I thought this was super
cool, so I took a look at the code.</p>
<p>The first thing I checked out was the <a href="https://gitlab.com/cdrzewiecki/celos/-/blob/master/kernel/src/arch/x86_64/boot.asm">bootloader code</a>. This is the first
non-firmware code that gets run and is responsible for initializing the stack,
transitioning the processor, loading the kernel, and setting up the page table.
This code is written in x86 assembly and looked reasonably familiar, with <code class="language-plaintext highlighter-rouge">mov</code>,
<code class="language-plaintext highlighter-rouge">jmp</code>, <code class="language-plaintext highlighter-rouge">push</code>, and <code class="language-plaintext highlighter-rouge">pop</code> instructions. Reading through a bit more, I came across
an instruction named <code class="language-plaintext highlighter-rouge">cpuid</code>. This stuck out to me because it wasn’t a “simple”
operation like moving data into a register or jumping to a memory location.
Here’s what it looked like in context:</p>
<figure class="highlight"><pre><code class="language-nasm" data-lang="nasm"><span class="nl">check_long_mode:</span>
<span class="c1">; test if extended processor info in available</span>
<span class="nf">mov</span> <span class="nb">eax</span><span class="p">,</span> <span class="mh">0x80000000</span> <span class="c1">; implicit argument for cpuid</span>
<span class="nf">cpuid</span> <span class="c1">; get highest supported argument</span>
<span class="nf">cmp</span> <span class="nb">eax</span><span class="p">,</span> <span class="mh">0x80000001</span> <span class="c1">; it needs to be at least 0x80000001</span>
<span class="nf">jb</span> <span class="nv">.no_long_mode</span> <span class="c1">; if it's less, the CPU is too old for long mode</span>
<span class="c1">; use extended info to test if long mode is available</span>
<span class="nf">mov</span> <span class="nb">eax</span><span class="p">,</span> <span class="mh">0x80000001</span> <span class="c1">; argument for extended processor info</span>
<span class="nf">cpuid</span> <span class="c1">; returns various feature bits in ecx and edx</span>
<span class="nf">test</span> <span class="nb">edx</span><span class="p">,</span> <span class="mi">1</span> <span class="o"><<</span> <span class="mi">29</span> <span class="c1">; test if the LM-bit is set in the D-register</span>
<span class="nf">jz</span> <span class="nv">.no_long_mode</span> <span class="c1">; If it's not set, there is no long mode</span>
<span class="nf">ret</span>
<span class="nl">.no_long_mode:</span>
<span class="nf">mov</span> <span class="nb">al</span><span class="p">,</span> <span class="s">"2"</span>
<span class="nf">jmp</span> <span class="nv">error</span></code></pre></figure>
<p>From the comments, it looks like it gets used to see if the processor can
transition into <a href="https://en.wikipedia.org/wiki/Long_mode">“long mode”</a>, which is essentially 64-bit mode. This got me
curious if there was other information that we could get from the CPU itself.
I checked out the <a href="https://en.wikipedia.org/wiki/CPUID">Wikipedia page</a> for it and became even more fascinated with
all of the information that you can get out of your processor. There was even
some example assembly code:</p>
<figure class="highlight"><pre><code class="language-nasm" data-lang="nasm"> <span class="nf">.data</span>
<span class="nl">s0:</span> <span class="nf">.asciz</span> <span class="err">"</span><span class="nv">CPUID</span><span class="p">:</span> <span class="o">%</span><span class="nv">x</span><span class="err">\</span><span class="nv">n</span><span class="err">"</span>
<span class="nf">.text</span>
<span class="nf">.align</span> <span class="mi">32</span>
<span class="nf">.globl</span> <span class="nv">main</span>
<span class="nl">main:</span>
<span class="nf">pushq</span> <span class="o">%</span><span class="nb">rbp</span>
<span class="nf">movq</span> <span class="o">%</span><span class="nb">rsp</span><span class="p">,</span><span class="o">%</span><span class="nb">rbp</span>
<span class="nf">subq</span> <span class="kc">$</span><span class="mi">16</span><span class="p">,</span><span class="o">%</span><span class="nb">rsp</span>
<span class="nf">movl</span> <span class="kc">$</span><span class="mi">1</span><span class="p">,</span><span class="o">%</span><span class="nb">eax</span>
<span class="nf">cpuid</span>
<span class="nf">movq</span> <span class="kc">$</span><span class="nv">s0</span><span class="p">,</span><span class="o">%</span><span class="nb">rdi</span>
<span class="nf">movl</span> <span class="o">%</span><span class="nb">eax</span><span class="p">,</span><span class="o">%</span><span class="nb">esi</span>
<span class="nf">xorl</span> <span class="o">%</span><span class="nb">eax</span><span class="p">,</span><span class="o">%</span><span class="nb">eax</span>
<span class="nf">call</span> <span class="nv">printf</span></code></pre></figure>
<p>I copied this into <code class="language-plaintext highlighter-rouge">cpuid.s</code> and found <a href="https://stackoverflow.com/questions/4288921/hello-world-using-x86-assembler-on-mac-0sx">this Stack Overflow article</a> about compiling
assembly code on macOS (using <code class="language-plaintext highlighter-rouge">gcc -o cpuid cpuid.s</code>). Unfortunately, there were
errors. The first error was an <code class="language-plaintext highlighter-rouge">invalid alignment value</code> on the <code class="language-plaintext highlighter-rouge">.align 32</code> instruction.
I figured that it wasn’t necessary, so I removed it and moved on to the next error.
That got me to <code class="language-plaintext highlighter-rouge">32-bit absolute addressing is not supported in 64-bit mode</code> on the
<code class="language-plaintext highlighter-rouge">movq $s0,%rdi</code> instruction. After a fair bit of searching, I managed to come across
the <a href="https://code-examples.net/en/q/319865"><code class="language-plaintext highlighter-rouge">lea</code> instruction</a>, which loads the effective address of a static value using
relative addressing. Changing the <code class="language-plaintext highlighter-rouge">movq</code> to <code class="language-plaintext highlighter-rouge">leaq s0(%rip), %rdi</code> got it to compile
and work! Here’s the output from my laptop:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gcc -o cpuid cpuid.s
$ ./cpuid
CPUID: 0x0306d4
</code></pre></div></div>
<p>I added an extra call to get the <a href="https://en.wikipedia.org/wiki/CPUID#EAX=0:_Highest_Function_Parameter_and_Manufacturer_ID">“highest function number,”</a> which is what the
original bootloader code is using to see if the CPU can be transitioned from
<a href="https://en.wikipedia.org/wiki/Real_mode">“real mode”</a> (16-bit mode) into <a href="https://en.wikipedia.org/wiki/Long_mode">“long mode”</a> (64-bit mode). Check out the
<a href="https://github.com/jasondew/cpuid">full code</a> and try it for yourself!</p>Handling HTTP Errors in Elm 0.162016-03-16T02:18:00+00:002016-03-16T02:18:00+00:00/2016/03/16/handling-http-errors-in-elm<article class="markdown-body entry-content" itemprop="text">
<p><strong>Expected knowledge:</strong> Basics of Elm and the Elm Architecture</p>
<p><strong>Read to learn:</strong> How to handle HTTP errors in Elm</p>
<p>I remember that one of the things I struggled with when writing my first Elm program (other than writing a JSON decoder) was dealing with non-successful HTTP responses. I've heard the question asked a few times in the Elm Slack channel and decided that I would share how I solved the problem. I'll assume that you're following the Elm Architecture.</p>
<p><strong>Status Quo</strong></p>
<p>Let's start by looking at the signature of the <code>Http.get</code> function:</p>
<script src="https://gist.github.com/fcbdfd4935e0b88ab693.js"> </script>
<p>What we need out of this is an <code>Effects Action</code> to return in our <code>update</code> function. Here's how things might look to start with:</p>
<script src="https://gist.github.com/e1172220dfbfd98845e2.js"> </script>
<p>This works great except that we throw away information about the failure case. We either get a <code>Just Response</code> or <code>Nothing</code>!</p>
<p><strong>Retaining Error Information</strong></p>
<p>Let's see how we can do better. First, we need to change our <code>Updated</code> action to account for errors by accepting a <code>Result</code> that can contain either an <code>Error</code> or a <code>Response</code>:</p>
<script src="https://gist.github.com/9dd14cad30e0a9fae891.js"> </script>
<p>Then we can update our request function to send the errors along as well:</p>
<script src="https://gist.github.com/7892e8076a4fea4006db.js"> </script>
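<p>In case the embedded gist doesn't render for you, here's roughly the shape of that request function in Elm 0.16 — <code>responseDecoder</code> and <code>Response</code> stand in for your own decoder and type:</p>

```elm
-- Task.toResult keeps the Http.Error instead of
-- collapsing failures into Nothing
type Action
  = Updated (Result Http.Error Response)

fetchData : String -> Effects Action
fetchData url =
  Http.get responseDecoder url
    |> Task.toResult
    |> Task.map Updated
    |> Effects.task
```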
<p>Our update function will need to change as well to accommodate the extra information we're passing along:</p>
<script src="https://gist.github.com/9a08d89d41e00871a91f.js"> </script>
<p>This is a big step forward! Now we can update our UI to reflect the fact that something's gone wrong or log the failure to the console.</p>
<p><strong>An Alternative Approach</strong></p>
<p>If we want, though, we can also have a generic error action that we can reuse. Here's what that approach would look like:</p>
<script src="https://gist.github.com/d83343901d15b1e8c142.js"> </script>
<p>Of course, instead of just logging the errors to the console, we can show the user an error state, retry, or anything else! I hope this helps, but if not, ping me on the Elm Slack channel.</p>
</article>Jason DewHWC: The Start2014-03-01T01:27:00+00:002014-03-01T01:27:00+00:00/2014/03/01/hwc-start<span style="font-family: Arial; font-size: 13px;">As part of our Harvest annual review, I declared that I wanted to improve on my writing ability in the coming year.
Other people showed a similar interest and so the Harvest Writing club was born and this is my first entry for it.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">About 3 years ago my wife, son, and I were living in a suburb of Columbia, South Carolina. It was a nice town and we had nice neighbors that we enjoyed spending time with. We even joined the Citizen's Police Academy, which I recommend. Even so, we felt somewhat unfulfilled. After work, there wasn't much of anything that I wanted to do other than mow the lawn or wash the car. My wife and I both grew up in rural areas and we reminisced about what it was like there as kids. We wanted our son to be able to have the same experiences and freedoms. We wanted to live in the country again.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">So we started shopping for some land near where we were living. Of course, being near a city, raw land was not affordable so we expanded our search. This went on for about six months but we didn't find anything that fit the bill. Then, during a holiday spent with my family, we found out that roughly 55 acres were for sale next to my grandparent's. Even better was that the sellers were asking a reasonable price! We hadn't considered moving back to my hometown before but then the idea took root. It helped that at the time my grandfather, whom I've always been rather close to, became ill.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">Even though 55 acres was so much more than we ever thought we'd want, it was just what we were looking for. Secluded enough to be peaceful but with some civilization within a half hour drive. The land also bordered my grandparents and a couple other family members' homes. 
As if that weren't enough, my mom also owns a vacant house that is within walking distance to the land.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">My grandmother spearheaded the negotiations with the seller and drove quite a hard bargain. They accepted our offer and we made plans to move into my mom's old house. In December of 2011 we made the move. It was an easy transition and felt great. We were able to ride ATVs and enjoy being outside.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">A few months after the move, we had some land that was becoming overgrown with weeds. Based on my cousin's advice, we purchased five goats to help clean up. We fell in love pretty quickly. They were so cute and curious. Pretty soon we had a herd and we started a business around breeding them. Some time after, we rescued a horse, got a herd guardian dog, and bought some more horses. At this point we have about 12 goats, 4 horses, and 4 dogs. We love them all and can't imagine life without them.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">Immediately after the move, we began saving and making plans to build a house. We must have gone through hundreds of house plans, at least a couple dozen banks, and three home builders before we decided. Then we spent the next couple of weeks trying to squeeze everything we've ever wanted into a house within our budget. In November of 2013, we broke ground.</span><br /><span style="font-family: Arial; font-size: 13px;"></span><br /><span style="font-family: Arial; font-size: 13px;">As of today, February 28th, 2014, we are within 30 days of moving into our "forever home." I'm full of pride at what we've accomplished, how hard we've worked, and how long we've waited. 
I think that it's a great start to a happy rest of our lives.</span>Jason DewTuring Machines explained from the ground up2012-05-07T13:16:00+00:002012-05-07T13:16:00+00:00/2012/05/07/turing-machines-explained-from-ground-up<p>I gave a talk at <a href="http://convergese.org/">ConvergeSE 2012</a> on this topic, so I thought I'd write it up as a blog post as well. That said, let's jump right in.</p><p><span style="font-size:medium;">Turing</span></p><p>So the obvious thing to start with is Alan Turing, the man for whom Turing machines are named. Turing was a British mathematician sometimes called the "father of Computer Science." I call him a mathematician because he did computer science before there was a discipline so named.
He was influential in several fields, including AI and cryptanalysis. Specifically, he worked at Bletchley Park during World War II, breaking communications encrypted by the German Enigma machine. His first major achievement was a paper written in 1936, before he had obtained his Ph.D., proving that the Entscheidungsproblem had no solution.</p><p>The Entscheidungsproblem was proposed by a very famous mathematician named David Hilbert in 1928. Simply put, the question is whether or not an "algorithm" could be devised to determine if a statement in first-order logic is universally valid. To answer this question, Turing devised a theoretical machine that he used to answer it in the negative. The machine became known as a Turing machine.</p><p><span style="font-size:medium;">Background</span></p><p>In order to understand this machine, we need to start with some terminology. Most of these terms have roots in language, so they'll seem familiar. First, we have an <strong>alphabet</strong>. This is simply a set of symbols. For example, we have the set of lowercase roman letters, denoted: {"a", "b", "c", ..., "z"}. Another example, that we'll use later, is the <strong>binary alphabet</strong> or {"0", "1"}. We can represent any number with just these two symbols.</p><p>The natural thing to do with a set is to combine the elements into a sequence of symbols. This construction is called a <strong>string</strong>. Some examples from the binary alphabet are "0", "0101001", and the empty string. So, to reiterate, zero or more symbols from an alphabet gets you a string.</p><p>Finally, a <strong>formal language</strong> is a set of strings "over" an alphabet. For example, we could define a formal language of metasyntactic variables: {"foo", "bar", "baz", "quux"}. The alphabet here is taken to be the lowercase roman letters, as before. Another example is the two-digit binary numbers: {"00", "01", "10", "11"}.
Notice that both of these sets are finite, but this doesn't have to be the case. Consider the alphabet {"a", "b"} and the formal language {"b", "ab", "aab", "aaab", ...}, which is the set of zero or more "a"s followed by a single "b". You could write this more compactly as a*b, but more on that later.</p><p><span style="font-size:medium;">Finite Automata</span></p><p>Now for the fun stuff: Deterministic Finite Automata, also known as DFAs or just finite state machines. These simple machines are defined by an alphabet, a set of states, and a transition function. The alphabet defines the valid symbols that the machine can take as input. One of the states is defined as the <strong>start state</strong> and one or more are defined as <strong>accepting states</strong>. The start state is pretty obvious: it's just the initial state of the machine. The accepting states determine whether or not the machine "accepts" the input it was given. The most interesting part, though, is the transition function. It takes a symbol from the input stream and the current state of the machine, and returns the new state we will transition to. Here's a graphical representation of a DFA that accepts binary strings that are multiples of 3:</p><p><span style="font-family:Times;font-size:small;"><strong><div class='p_embed p_image_embed'><img alt="500px-dfa_example_multiplies_of_3" height="213" src="http://jasondew.files.wordpress.com/2012/05/500px-dfa_example_multiplies_of_3-svg-scaled500.png" width="500" /></div></strong></span></p><p>The notation requires some explanation. The circled values represent states, with the start state denoted by an arrow pointing to it and the accepting states denoted with double circles. The arrows leaving and entering the states are the transitions. Taken all together, they define the transition function for this DFA.
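A transition function like this is easy to express directly in code. Here's a minimal Ruby sketch of the multiples-of-3 DFA (the state names and table layout are my own, read off the diagram; each state tracks the value of the bits consumed so far, mod 3, with S0 as the start and only accepting state):

```ruby
# Transition table for the multiples-of-3 DFA: maps [state, symbol] pairs
# to the next state. Reading bit b in a state tracking n mod 3 moves us
# to the state tracking (2n + b) mod 3.
TRANSITIONS = {
  [:s0, "0"] => :s0, [:s0, "1"] => :s1,
  [:s1, "0"] => :s2, [:s1, "1"] => :s0,
  [:s2, "0"] => :s1, [:s2, "1"] => :s2
}

def accepts?(input)
  # Fold the input through the transition function, starting at :s0,
  # then check whether we halted in the accepting state.
  final_state = input.chars.reduce(:s0) { |state, symbol| TRANSITIONS.fetch([state, symbol]) }
  final_state == :s0
end

accepts?("00")  # => true  ("00" is 0, a multiple of 3)
accepts?("110") # => true  ("110" is 6)
accepts?("1")   # => false ("1" is 1)
```

Note that the whole machine is just a lookup table plus a fold over the input, which is exactly what the formal definition says a DFA is.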
The arrow from state S_0 to S_1 with the label "1" means that if we are in state S_0 and we read a "1" on the input stream, then we should transition to state S_1. Furthermore, just from the diagram we can infer that the alphabet here is {"0", "1"}. This is because DFAs must have a leaving arrow for each symbol on each state. This is the deterministic property.</p><p>It will really help cement the concept if you run through a few examples. Consider the input "00". We start at state S_0 and see a "0", so we stay in the same state. When we see the second "0", we again transition to state S_0. Since we're out of input at this point, we consider whether or not we're in an accepting state. It turns out we are, which means that the machine has accepted the input. In this case, the machine is saying that "00" (0 in decimal) is a multiple of 3. Since 0*3=0, we can see this is true. Now consider the input "1". In this case, we end up in state S_1, which is not an accepting state. Therefore, the machine rejects that input. This makes sense because there is no (integer) value such that x * 3 = 1.</p><p>Now, consider what would happen if we relax the deterministic constraint. This would mean that the transition function can now return zero or more states. In other words, we can have no transition out of a state at all (a sink), a single transition (as before), or multiple transitions. In the final case, we're allowing the machine to split itself into however many transitions there are, effectively giving us a tree of automata. It seems like this should give us quite a bit more expressive power.</p><p>These machines are called nondeterministic finite automata, or NFAs. Let's look at an example:</p><p><div class='p_embed p_image_embed'><img alt="500px-nfasimpleexample" height="273" src="http://jasondew.files.wordpress.com/2012/05/500px-nfasimpleexample-svg-scaled500.png" width="500" /></div></p><p>This machine is using the same alphabet as before and has only two states, p and q.
Here, the starting state is p and q is the only accepting state. Notice that when in state p and seeing a "1", we simultaneously stay in the p state and also move to the q state. You can think of this as multiple universes, or as cloning the machine so that we keep track of all possible paths. It turns out that this machine accepts any binary string that ends with a "1".</p><p>Surprisingly, NFAs and DFAs are equivalent in expressive power. That is, any NFA can be converted into a DFA that accepts the same strings. So even though we allow non-determinism, we can still convert the machine into an equivalent, but generally larger, DFA.</p><p><span style="font-size:medium;">Regular languages</span></p><p>The set of strings that a finite automaton accepts is called its language. Conversely, a <strong>regular language</strong> is any language that can be recognized by a finite automaton, either deterministic or not. Even more interesting is that a language is regular if and only if some <strong>regular expression</strong> describes it. What this means is that regular expressions and DFAs have the same capability in describing languages. So we can convert from a DFA into a regular expression and vice versa.</p><p><span style="font-size:medium;">Turing Machines</span></p><p>Finally, we're ready to describe Turing Machines. They are just a small step up in complexity from the finite automata we just looked at. We now have a name for the stream of input: the tape. Turing machines can read from and write to the tape as well as control its movement. So now we need two alphabets: the input alphabet and the output (or tape) alphabet. We still have a set of states, except that now we will have one starting state, one accepting state, and one rejecting state. There will still be a transition function, except that now it takes a state and the symbol at the current position on the tape and returns a new state, possibly a symbol to write, and a direction to move (either left or right).
Notice that we don't have to write a symbol, but we do have to move.</p><p>Let's look at an example Turing Machine. We'll name it M for machine. It's going to have an input alphabet of {"0"} and a tape alphabet of {"_", "x"}. The machine will accept "0" strings whose length is a power of 2. That is, strings whose length can be expressed as 2^x for some non-negative integer x. For example, "0", "00", and "0000" are the smallest strings that should be accepted because they are of length 1 (2^0), 2 (2^1), and 4 (2^2), respectively. To be clear, the machine would reject strings like "000" and "00000".</p><p><div class='p_embed p_image_embed'><a href="http://jasondew.files.wordpress.com/2012/05/screen_shot_2012-05-04_at_10-25-58_pm-scaled1000.png"><img alt="Screen_shot_2012-05-04_at_10" height="296" src="http://jasondew.files.wordpress.com/2012/05/screen_shot_2012-05-04_at_10-25-58_pm-scaled1000.png?w=300" width="500" /></a></div></p><p>This description should look familiar. The labels on the transition arrows have gotten a little more interesting. They are in the form "symbol -> [symbol,] direction", where the first symbol defines when this transition is applicable, the second symbol is optional and defines the symbol to write to the tape, and the direction is either L or R, for moving left or right on the tape.</p><p>So if we imagine the tape with a "." at the current position, the transition of states for the input "00" goes something like this:</p><p><div class='p_embed p_image_embed'><img alt="Screen_shot_2012-05-04_at_10" height="372" src="http://jasondew.files.wordpress.com/2012/05/screen_shot_2012-05-04_at_10-31-26_pm-scaled500.png" width="175" /></div></p><p>You should try "running" the machine on other inputs, making a table similar to the one above. Basically, the procedure is to mark off half of the 0s on each pass. If we run out of 0s partway through a pass, then we know we should reject.
Otherwise, we accept the string.</p><p>Once you get comfortable with what this machine is doing, you'll notice that each state has a specific purpose. For example, the state q1 marks the first 0 with an _ so that the machine knows when we're at the beginning of the string. States q2, q3, and q4 are doing the bulk of the work, marking through the "0"s with "x"s and moving the tape. State q5 is a reset procedure, moving us back to the beginning of the input.</p><p><span style="font-size:medium;">Conclusion</span></p><p><span style="font-size:medium;"><span style="font-size:small;">So why do we care about Turing machines? First of all, they define, in concrete terms, what an algorithm is. This is related to the Church-Turing thesis and what it means to be "computable." It also turns out that they are equivalent in power to any other reasonable computational model. This is fairly surprising considering how relatively simple they are. They represent the essence of what it is to be a "computer."</span></span></p><p><span style="font-size:medium;">References</span></p><p><span style="font-size:x-small;"><strong style="font-family:Times;font-size:medium;"><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;font-style:italic;vertical-align:baseline;">Deterministic finite automaton</span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">. (2012, March 11).
Retrieved from <a href="http://en.wikipedia.org/wiki/Deterministic_finite_automaton">http://en.wikipedia.org/wiki/Deterministic_finite_automaton</a></span><br /><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;"> </span><br /><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;font-style:italic;vertical-align:baseline;">Nondeterministic finite automaton</span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">. (2012, April 20). Retrieved from <a href="http://en.wikipedia.org/wiki/Nondeterministic_finite_automaton">http://en.wikipedia.org/wiki/Nondeterministic_finite_automaton</a></span><p /><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">Petzold, C. (2008). </span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;font-style:italic;vertical-align:baseline;">The annotated Turing</span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">. Indianapolis: Wiley Publishing, Inc.</span><p /><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">Sipser, M. (2006). </span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;font-style:italic;vertical-align:baseline;">Introduction to the theory of computation</span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">. (2nd ed.). Boston: Thomson Course Technology.</span><p /><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;font-style:italic;vertical-align:baseline;">Turing machine</span><span style="font-family:Times New Roman;background-color:transparent;font-weight:normal;vertical-align:baseline;">.
(2012, April 17). Retrieved from <a href="http://en.wikipedia.org/wiki/Turing_machine">http://en.wikipedia.org/wiki/Turing_machine</a></span></strong></span></p>Jason Dew
coded_options: A new Ruby gem for coded fields2010-07-12T21:57:00+00:002010-07-12T21:57:00+00:00/2010/07/12/codedoptions-new-ruby-gem-for-coded<p>I recently started a couple of new projects (Rails 3 + mongoid) and I've noticed a pattern in the way I handle coded fields. My key example is something like the following: you have a field, say status, that can have several values, say active, closed, and invalid. Obviously you could store those as strings in the database, or you can code them as, say, 0, 1, and 2. Normal database practice is to code them as integers to save space, but a far more important concern is that clients change their minds about what you call things. It's much easier to change the string values in some code than it is to go changing every string in the database.</p><p>Anyway, the usage is something like this (yanked directly from the README):</p><p>Here line 4 (the coded_options call) basically gets mapped into lines 6 through 14. Nothing spectacular, but it's really cleaned up my code quite a bit, so maybe it will be useful to some other folks. The code is up on GitHub (<a href="http://github.com/jasondew/coded_options">http://github.com/jasondew/coded_options</a>) and the gem is on Gemcutter (<a href="http://rubygems.org/gems/coded_options">http://rubygems.org/gems/coded_options</a>).</p>Jason Dew
The importance of a good algorithm2010-07-10T01:15:00+00:002010-07-10T01:15:00+00:00/2010/07/10/the-importance-of-good-algorithm<p>I'm studying for the computer science Ph.D. qualifying exam, so I've started going back through my algorithms book (Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein). The first chapter was, of course, about motivating the study of algorithms.
One exercise that made an impression on me had you generate a table giving the largest problem you could solve in different amounts of time using algorithms of different asymptotic complexities.</p><p>Since I take every opportunity to make progress learning Haskell, I coded it up:</p><p>What the table shows is the largest value of n you could process given an algorithm of a certain complexity and a certain amount of time:</p><table> <tr><th>f(n)</th> <th>1 sec.</th> <th>1 min.</th> <th>1 hour</th> <th>1 day</th> <th>1 month</th> <th>1 year</th> <th>1 century</th></tr> <tr><th>lg(n)</th><td>2.7e43</td><td>∞*</td><td>∞*</td><td>∞*</td><td>∞*</td><td>∞*</td><td>∞*</td></tr><tr><th>sqrt(n)</th><td>10000</td><td>3.6e8</td><td>1.3e11</td><td>7.5e13</td><td>6.7e16</td><td>9.7e18</td><td>9.7e22</td></tr><tr><th>n</th><td>100</td><td>6000</td><td>3.6e5</td><td>8.6e6</td><td>2.6e8</td><td>3.1e9</td><td>3.1e11</td></tr><tr><th>n lg(n)</th><td>29</td><td>884</td><td>34458</td><td>6.5e5</td><td>1.6e7</td><td>1.6e8</td><td>1.3e10</td></tr><tr><th>n^2</th><td>10</td><td>77</td><td>600</td><td>2939</td><td>16099</td><td>55770</td><td>5.6e5</td></tr><tr><th>n^3</th><td>4</td><td>17</td><td>68</td><td>194</td><td>597</td><td>1357</td><td>6203</td></tr><tr><th>2^n</th><td>6</td><td>12</td><td>18</td><td>23</td><td>27</td><td>31</td><td>38</td></tr><tr><th>n!</th><td>4</td><td>7</td><td>8</td><td>10</td><td>11</td><td>12</td><td>14</td></tr></table><p>* The values here aren't really infinity <em>but</em> they are over 300 digits!</p><p>So the lesson here? Having an algorithm with a good asymptotic complexity makes a huge difference in the amount of data it is feasible to process. Just look at the difference between linear complexity (n) and logarithmic complexity (lg n): 41 orders of magnitude!</p>Jason Dew
Named instances for ActiveRecord2009-10-08T19:00:00+00:002009-10-08T19:00:00+00:00/2009/10/08/named-instances-for-activerecord<p>For a project that I'm working on at my day job, we have a governmental client for which we are building a pretty large and complicated online/offline Ruby on Rails application. As part of this app there are tons of data-specific rules. For example, if a client with HIV is being assessed, then certain fields may have to be displayed or hidden and there are rules that get applied differently. So let's say you have the following setup:</p>
<script src="https://gist.github.com/205241.js"> </script>
<p>Now, somewhere in your code you want to be able to take a specific action only if that client has a particular diagnosis. Without named_instances you might do something like:</p>
<script src="https://gist.github.com/205250.js"> </script>
<p>With named_instances you can write the following faster, more concise code:</p>
<script src="https://gist.github.com/205253.js"> </script>
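In case the gists above don't render, here's a rough, hypothetical sketch in plain Ruby of the idea behind named instances (this is not the gem's actual API, and the Diagnosis data is a stand-in for database rows): well-known records get exposed as memoized class-level readers.

```ruby
# Hypothetical sketch of the named-instances idea (not the gem's real API):
# expose specific well-known records by name so call sites can write
# Diagnosis.hiv instead of repeating a find-by-name lookup everywhere.
class Diagnosis
  ALL = { "HIV" => 1, "Diabetes" => 2 } # stand-in for database rows

  def self.named_instance(method_name, record_name)
    define_singleton_method(method_name) do
      # Memoize the lookup so the "query" only runs once per name.
      @named_instances ||= {}
      @named_instances[method_name] ||= ALL.fetch(record_name)
    end
  end

  named_instance :hiv, "HIV"
  named_instance :diabetes, "Diabetes"
end

Diagnosis.hiv      # => 1
Diagnosis.diabetes # => 2
```

The appeal is that the well-known records become first-class methods, so a typo like `Diagnosis.hvi` fails loudly instead of a mistyped string silently matching nothing.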
<p>We've been using this functionality for about 6 months now and it's been great. The gem is out on <a href="http://gemcutter.org/gems/named_instances">GemCutter</a> (which rocks) and the repo is at <a href="http://github.com/jasondew/named_instances">GitHub</a>. Hopefully it will be useful in your projects. Comments, criticisms, and patches welcome.</p>Jason Dew
Weirdest test failure ever…2009-09-17T19:00:00+00:002009-09-17T19:00:00+00:00/2009/09/17/weirdest-test-failure-ever<p>So based on some customer feedback on a project I'm currently working on, I added a validation requiring that one of the dates be in the past. The validation is pretty straightforward:</p>
<script src="https://gist.github.com/188474.js"> </script>
<p>After re-running the test suite, though, I got a couple of failures. Digging into the code, it turns out that the following snippet evaluates to true!</p>
<script src="https://gist.github.com/188475.js"> </script>
<p>Of course this makes no sense. What's more, I tried the code again while writing this blog post and now it's false. So I did some digging into the Ruby internals, and it turns out that <code>Time</code>'s comparison is implemented in C in the following method:</p>
<script src="https://gist.github.com/189385.js"> </script>
<p>The best I can come up with is that there's a problem with the GetTimeval method, since it's just a macro that pulls some data out of a time struct -- but the Date class is implemented in pure Ruby. Anyone come across this, or can anyone explain it better?</p>Jason Dew
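For what it's worth, one way to sidestep flaky cross-class comparisons like this (a workaround, not a diagnosis of the bug above) is to normalize both sides to a single class before comparing:

```ruby
require "date"

# Comparing a Date against a Time mixes two classes with different
# precisions. Converting the Date to a Time first keeps the comparison
# within one class: Date#to_time gives midnight (local time) on that date.
today_as_time = Date.today.to_time

# Midnight today is never later than the current instant:
today_as_time <= Time.now # => true
```

The same idea works in the other direction with `Time#to_date` when day-level granularity is all the validation actually needs.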