Yearly Archives: 2010

It Begins


And so it begins: another of us catches the motorcycle bug. My nieces got a tiny Kawasaki dirt bike for Christmas, complete with training wheels. Scarlett wasn’t that interested, but Adelaide got right into it, revving the engine and asking when the weather would be clear enough for her to take a spin around the yard.

Focused


I went with my nieces to the ice skating rink yesterday. Scarlett had fun but was very focused during her time on the ice. You can really see the effort she’s putting in.

Bloom Energy


Well, tomorrow will mark my last day at Apple. The last two and a half years have been an incredible experience: interesting, educational, exasperating, and still hard for me to believe. I think it will be a long time before I’ve fully processed everything I experienced working at Apple, but that processing can now begin, as my days there have come to a close.

A great opportunity came up with Bloom Energy, so I’m moving from Apple and consumer electronics to Bloom and green energy. It’s an interesting company with an even more interesting product. For those of you who aren’t familiar with Bloom’s fuel cells from the recent news coverage, here is the feature that 60 Minutes did on Bloom Energy. If that doesn’t satisfy you, there are a bunch more links on the Bloom Energy news page.

I will look back fondly on the projects I worked on and people I worked with at Apple, but I am very excited to see where this new adventure will lead me.

Wish me luck.

Laguna Seca Trackday


I took a PTO day and went to Laguna Seca with Alex for a track day of his.

There were a couple of cool vintage cars having a day of it, and it really made me want to work on my 2002.

Unfortunately, I brought a camera with dead batteries, so I didn’t manage to get a photo of the 2002 Roundie that was there, but I did manage to get one shot of this ’66 Mustang before the batteries died completely.

Server Side Dynamic Elements

I am a big fan of Ruby and most of the things that come with it. The exception is the overhead generated by dynamic websites built on it: server-side dynamic content generation is a performance problem I have run up against many, many times. With that in mind, I had originally planned to do client-side parsing of the Twitter, Flickr, and Delicious streams that I integrate into amdavidson.com.

This worked fine until I left the country on a business trip to China and discovered that the Great Firewall of China would not outright block the client-side scripts’ requests to Twitter, but would instead let them time out. This led to awful page loading times.

I have been looking to switch amdavidson.com back to Ruby for some time, as I don’t much like working with PHP. This gave me a good opportunity for a rewrite, and here is the server-side parsing that I worked out.

For Twitter, I wanted to pull the JSON stream, do some basic formatting, and linkify the usernames and URLs in each tweet. I came up with the following code:

    <% if twitter_enabled == true then

      require 'open-uri'
      require 'json/ext'

      twitter_url = "https://api.twitter.com/1/statuses/user_timeline.json?screen_name=amdavidson"
      response = open(twitter_url, 'User-agent' => 'amdavidson.com').read
      tweets = JSON.parse(response)

      # Wrap bare URLs, then @usernames, in anchor tags.
      def linkify(text)
        text = text.gsub(%r{(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?]))}, '<a href="\1">\1</a>')
        text = text.gsub(/@([A-Za-z0-9_]+)/, '<a href="http://twitter.com/\1">@\1</a>')
        text
      end

      for t in tweets[0...5] do
    %>

      <div class="tweet tweet-<%= t["id"] %>">
        <a href="http://twitter.com/<%= t["user"]["screen_name"] %>"><img width="48" height="48" src="<%= t["user"]["profile_image_url"] %>" rel="<%= t["user"]["profile_image_url"] %>" /></a>
        <p class="text">
          <span class="username"><a href="http://twitter.com/<%= t["user"]["screen_name"] %>"><%= t["user"]["screen_name"] %></a>:</span>
          <%= linkify(t["text"]) %>
          <% if t["in_reply_to_screen_name"] then %>
            <span class="time"><%= DateTime.parse(t["created_at"]).strftime("%B %e at %l:%M") %> in reply to
            <a href="http://twitter.com/<%= t["in_reply_to_screen_name"] %>/status/<%= t["in_reply_to_status_id"] %>"><%= t["in_reply_to_screen_name"] %></a></span>
          <% else %>
            <span class="time"><%= DateTime.parse(t["created_at"]).strftime("%B %e at %l:%M") %></span>
          <% end %>
        </p>
      </div>

    <% end
    end %>

Breaking that down a little: I pull the stream using open-uri, parse it with JSON.parse, and then linkify it using John Gruber’s excellent (and extremely long) URL-matching regex, plus a regex of my own design for linkifying the Twitter usernames mentioned in a tweet. The rest of the code is just formatting.
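
To make that concrete, here is roughly what linkify produces for a sample tweet (the URL and username below are made up for illustration):

    sample = "New post: http://example.com/2010/10 cc @amdavidson"
    puts linkify(sample)
    # => New post: <a href="http://example.com/2010/10">http://example.com/2010/10</a>
    #    cc <a href="http://twitter.com/amdavidson">@amdavidson</a>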

Here’s some slightly simpler code for my 12 most recent Flickr images:

    <% if flickr_enabled == true then

      require 'open-uri'
      require 'json/ext'

      flickr_url = "http://api.flickr.com/services/rest/?&method=flickr.people.getPublicPhotos&format=json&nojsoncallback=1&api_key=#{ENV['flickr_key']}&user_id=#{ENV['flickr_id']}&per_page=12"
      response = open(flickr_url, 'User-agent' => 'amdavidson.com').read
      photos = JSON.parse(response)["photos"]["photo"]

      for p in photos[0...12] do
        # Flickr image URLs are assembled from the farm, server, id, and secret;
        # the "_t" suffix is the small thumbnail, no suffix is the medium size.
        square = "http://farm#{p["farm"]}.static.flickr.com/#{p["server"]}/#{p["id"]}_#{p["secret"]}_t.jpg"
        medium = "http://farm#{p["farm"]}.static.flickr.com/#{p["server"]}/#{p["id"]}_#{p["secret"]}.jpg"
        url = "http://flickr.com/photos/#{p["owner"]}/#{p["id"]}"
    %>

    <a class="preview" href="<%= url %>" rel="<%= medium %>">
      <img class="flickr-img" src="<%= square %>" alt="" />
    </a>

    <% end
    end %>

And my code for Delicious:

    <% if delicious_enabled == true then

      require 'open-uri'
      require 'json/ext'

      url = "http://feeds.delicious.com/v2/json/#{ENV["delicious_name"]}"
      response = open(url, 'User-agent' => 'amdavidson.com').read
      links = JSON.parse(response)

      # Delicious uses terse keys: "u" is the URL, "d" the description
      # (title), and "n" the extended note.
      for l in links[0...5] do
    %>
        <li>
          <h2><a href="<%= l["u"] %>" title="<%= l["d"] %>" target="_blank"><%= l["d"] %></a></h2>
          <p><%= l["n"] %></p>
        </li>

    <% end
    end %>

None of this code is very light on the server, so if you have lighter methods, please let me know; I would love to lighten the load. In the meantime, I plan to mitigate it with the Varnish HTTP caching that is built into Heroku.
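
For what it’s worth, Heroku’s Varnish layer keys its caching off the Cache-Control response header, so the mitigation boils down to something like this minimal Sinatra sketch of mine (the five-minute TTL is an arbitrary choice for illustration):

    require 'sinatra'

    get '/' do
      # "public" plus a max-age lets Heroku's Varnish cache serve this page
      # without hitting the app (and re-parsing Twitter/Flickr/Delicious)
      # until the TTL expires.
      cache_control :public, :max_age => 300
      erb :index
    end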

WordPress XML to Toto

In my efforts to convert my blog at amdavidson.com to toto, I wrote a little script to convert the XML file that WordPress can export into the text files that toto understands.

It’s extremely hackish and will likely not generate 100% solid data; I had to hand-edit about 10 of my 140 posts. Do not use this on a production system, and check your posts beforehand.

If you’re still inclined, here’s the gist:

    #!/usr/bin/ruby

    require 'rubygems'
    require 'date'
    require 'nokogiri'

    puts 'parsing xml file'
    parsed = Nokogiri::XML(open("./wordpress.2010-10-06.xml"))

    puts 'pulling titles'
    title = Array.new
    parsed.xpath('//item/title').each do |n|
      title << n.text
    end

    puts 'pulling dates'
    date = Array.new
    parsed.xpath('//item/pubDate').each do |n|
      date << n.text
    end

    puts 'pulling content'
    content = Array.new
    parsed.xpath('//item/content:encoded').each do |n|
      content << n.text
    end

    puts 'pulling name'
    name = Array.new
    parsed.xpath('//item/wp:post_name').each do |n|
      name << n.text
    end

    puts 'muxing arrays'
    if title.length == date.length and date.length == content.length and content.length == name.length then
      posts = [title, date, content, name]
    else
      puts 'length broken!'
      exit 1
    end

    puts 'printing'
    i = 0
    while i < title.length do
      # Build the output filename from the post date and slug.
      filename = "articles/" + DateTime.parse(posts[1][i]).strftime("%Y-%m") + "-" + posts[3][i] + ".txt"

      file = File.new(filename, "w")

      # A toto article is a short header block, a blank line, then the body.
      file.puts "title: " + posts[0][i]
      file.puts "date: " + DateTime.parse(posts[1][i]).strftime("%Y/%m/%d")
      file.puts "author: Andrew"
      file.puts "\n"
      file.puts "#{posts[2][i]}"
      file.close

      i += 1
    end

Note that the filenames and directories are hard-coded… be sure to update them before running.
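
For reference, each generated file ends up looking roughly like this (the title, date, and body here are made up for illustration):

    title: Hello World
    date: 2010/10/06
    author: Andrew

    <p>The post body from WordPress lands here, unchanged.</p>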

Do Not Cross


I’ve been dealing with a lot of bureaucracy in the last few weeks working with the vendors in China. This photo felt strangely significant.

On a related note, I love the camera in the iPhone 4.

Re-introducing Shorten

After toying a bit with yourls, I went looking for a way to set up a URL shortener that was a bit less complicated and could be deployed on Heroku (a service I have recently become totally enamored with). This is what I ended up with.

After a few Google searches I came across this posting by Andrew Pilsch, in which he bluntly explains how to set up your own URL shortener and provides code for a Sinatra-based implementation.

This seemed perfect: Sinatra was something I had also been wanting to tinker with, and this looked like a good place to start.

The codebase provided was a good framework for what I wanted but didn’t quite check all the boxes, so I set out modifying it to be deployable on Heroku and to generate random short URLs (à la bit.ly) rather than the sequential ones it had originally been configured for.
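
The random-slug part boils down to something like the sketch below. This is my own rough illustration, not Pilsch’s code; the Shorten model lookup is hypothetical and stands in for however your fork checks that a slug is unused:

    # Alphabet for base-62 slugs, bit.ly style.
    CHARS = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a

    def random_slug(length = 5)
      (1..length).map { CHARS[rand(CHARS.size)] }.join
    end

    def unique_slug
      slug = random_slug
      # Shorten.first is a hypothetical DataMapper-style lookup; retry on collision.
      slug = random_slug while Shorten.first(:slug => slug)
      slug
    end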

I posted my fork of the code on GitHub and have a running example at ➼.ws.

Check it out and let me know what you think. If you get it running somewhere else, let me know; I’d love to see it.

Configuring PostgreSQL for Ruby on OS X

I found it to be a bit of a bear to get PostgreSQL installed on my OS X 10.6 Snow Leopard system, so as I went along I took notes. Here’s how I managed to get it up and running.

Despite the ease of pushing to Heroku for my projects, I like to have a local development environment. I had never used PostgreSQL before and wanted to make sure there would be no issues bringing it up for a new project I am starting, so I first checked what version of PostgreSQL Heroku was running and endeavored to get that same version running locally.

First, install MacPorts on your machine.

Then install PostgreSQL with the following command:

    $ sudo port install postgresql83 postgresql83-server

Configure a default database with the following commands (these will also be listed at the end of the MacPorts PostgreSQL installation):

    $ sudo mkdir -p /opt/local/var/db/postgresql83/defaultdb
    $ sudo chown postgres:postgres /opt/local/var/db/postgresql83/defaultdb
    $ sudo su postgres -c '/opt/local/lib/postgresql83/bin/initdb -D /opt/local/var/db/postgresql83/defaultdb'

It seems that some users, myself included, have issues with the last command. If you get this error:

    shell-init: error retrieving current directory: getcwd: cannot access parent directories: Permission denied
    could not identify current directory: Permission denied
    could not identify current directory: Permission denied
    could not identify current directory: Permission denied
    The program "postgres" is needed by initdb but was not found in the
    same directory as "initdb".
    Check your installation.

Try running these commands (from here and here):

    $ sudo dscl . -create /Users/postgres UserShell /usr/bin/false
    $ sudo dscl . -create /Users/postgres NFSHomeDirectory /opt/local/var/db/postgresql83

After the dust has settled and you have PostgreSQL running, you’ll likely want to install the gem that lets Ruby talk to it.

Make sure that /opt/local/lib/postgresql83/bin is in your $PATH. Then installing do_postgres is as easy as running:

    $ gem install do_postgres
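
Once the gem builds, a quick smoke test like this one (my own sketch; the connection URI assumes the defaultdb database and postgres user created above) should confirm Ruby can talk to the server:

    require 'rubygems'
    require 'do_postgres'

    # Connect to the default database created during initdb above.
    conn = DataObjects::Connection.new("postgres://postgres@localhost/defaultdb")
    reader = conn.create_command("SELECT version()").execute_reader
    reader.next!
    puts reader.values.first   # should print the PostgreSQL 8.3.x banner
    reader.close
    conn.close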

Hope that works for someone.

Switch Google to English Bookmarklet

For those of you who travel internationally like I do, you have probably been frustrated when you search Google and the results come back in the local, unintelligible language. There are preference settings for this, but for me they never seem to stick. Here’s my solution.

Here’s a handy bookmarklet that should bring you back to the good old US English Google for search pages. Drag this link to your bookmarks bar and click it whenever you see a Google search page in the wrong language.

Google English

For the curious, here’s the code behind it. It’s a bit simplistic and probably results in Google receiving two localization parameters, but fortunately Google seems to use only the last one in the URL.

    javascript:window.location=window.location.toString()+'&hl=en-US';