Geeking Out

Everyone’s a pundit

The rumors about upcoming iPhones and iPads are very straightforward this cycle compared to previous years, when lots of crazy ideas were always bandied about. Let me throw my hat in the ring and see how well I do at crystal ball prognostication, since the stakes are so low!

iPad

There are currently 18 (yes 18!) different models of iPad. This is unsustainable and causing inventory problems. At the same time, demand is so high and the iPad 2 so new that there is unlikely to be a major change in specs this holiday season.

I predict instead that Apple will streamline its models and simplify the iPad line in three ways. First, I predict that the 16GB model will be discontinued, and the 32GB model lowered in price to $499. Second, if the long-rumored “retina” high resolution iPad displays are ready to go, the 64GB model will keep its same price point but be updated with the better display. (If it isn’t ready, that model will simply receive a $100 price reduction). Third, the 3G models will switch to a new combined radio that allows them to work on either Verizon or AT&T (and possibly other carriers) without having to purchase carrier-specific models. These changes will decrease the number of models from 18 to 8 (32 and 64 GB, wifi and 3G, black and white), and hopefully relieve inventory problems.

iPhone

A new iPhone 5 (or iPhone 4S, although I predict “5” to match up with the new iOS 5) is certain. It will probably contain only minor feature improvements, such as a higher resolution rear-facing camera or longer battery life, since the iPhone 4 last year introduced a major design refresh. The wildcard is what will happen on the “low-end”, where the iPod touch and iPhone may start converging more. I wonder if the time has come for a 3G iPod touch similar to the iPad, or a lower-end iPhone that is “pay as you go”. I suspect that either way the traditional hard-drive based iPod will finally be discontinued.

Geeking Out

This mouse is magical

I’m finally using my new Magic Mouse from Apple, which is just like a normal mouse, except it does multi-touch actions. Which may seem, at first glance, sort of stupid. I’m not sure multi-touch gesturing will reach its full potential when confined to a mouse form factor, but it is certainly an innovative idea. And it doesn’t have a little scroll ball to get gunked up like the old Mighty Mouse, so that’s a plus.

The built-in actions are pretty limited — scrolling now has inertia, like the iPhone, and you can single or double click despite the lack of actual buttons. The real fun comes when you download a little app called MagicPrefs, which is a must-have companion to the Magic Mouse. The possibilities it opens up are breathtaking and somewhat ridiculous. It creates a dozen or so tap zones on the mouse and allows one to assign actions for clicks, swipes, drags, pinches, and taps, for up to four fingers. It would take some serious dexterity to use this program to its full potential.

My current configuration is pretty simple, but really great for my needs. Swiping down with two fingers brings up Spaces, swiping up with two fingers brings up Expose, and clicking the little Apple icon locks my screen. There is another option called the “MagicMenu” that allows you to tap or click and then swipe up, down, left, or right to select an action from a little hovering menu that appears. A little too finicky and complicated for me, but neat.

The best thing that MagicPrefs seems to do is fix — or at least lessen — the strange scroll scaling that the Magic Mouse uses, which makes it far too easy to move the mouse pointer nowhere (if moving the mouse slowly), or all the way to the other side of the screen (if moving quickly) with a tiny wrist flick. I know some people love that sort of “scaled” scrolling action, but I can’t stand it. I’m not sure how much is MagicPrefs fiddling the settings and how much is just me getting used to the odd behavior, but either way, this little mouse, full of multi-touch mystery, is definitely growing on me.

Geeking Out

Cleaning Technological House

The end of the year for me is traditionally a time to tidy my accumulated digital detritus. As part of that process, I’ve migrated AgBlog to a new server, set everything back up from scratch, and re-implemented all of my customized functionality and design in a much cleaner, more sustainable form. In the process, this blog has picked up some neat new functionality, including better display of photo galleries and automatic loading of additional posts when you reach the bottom of a page.

Enjoy, and let me know if you have any problems. I suspect I’ll have more to say in terms of new content very soon.

Geeking Out

Staying Agile

I have recently been evaluating the new version of a product called 1Password, a password saver and form filler for the Mac that automates filling web forms, entering credit card information, and logging into web sites. The built-in Apple Keychain does this fairly well, once you figure it out, but 1Password provides a lot more flexibility and additional features.

My biggest concern about 1Password’s new version is that it no longer uses the Apple Keychain as its secure backend storage system. There are inherent dangers in creating your own approach to secure encrypted storage, and using well-understood, widely-deployed solutions is generally the best approach. That said, I’m happy to discover that Agile has written up documents explaining why they abandoned the Apple Keychain and how the Agile Keychain was developed. Now I just have to decide if their arguments are persuasive enough to justify the switch.

Geeking Out

Digital Kindling

I’ve been using an Amazon Kindle for a few days, and had occasion a couple months ago to use a newer Kindle 2 for a few days as well. The device is wonderful and terrible all at once. I enjoy using it immensely, except for how painful it is. It is the first electronic device I have felt truly conflicted about.

The Kindle is an electronic reading device the size of a small hardback and half the thickness. It is a mess of plastic edges and buttons, with a little keyboard across the bottom composed of chiclet-sized keys, big silly page-turning buttons on the sides, and no way to really hold it comfortably. In the middle is a moderately sized “e-ink” display that provides a high-contrast reading surface similar to ink on paper. The newer Kindle 2 is a bit thinner, much more ergonomic, and uses a better interface for navigating through content, but is otherwise quite similar.

Because the device does not use a conventional display, its battery can last for a few weeks. Because it has built-in Sprint wireless, it can sync and download books automatically. That’s a neat trick.

All Kindle content purchased through Amazon (and it can only be purchased through Amazon) is protected by extremely onerous copy-prevention measures. One does not “buy” a book on the Kindle, but rather buys a “license.” Books on the Kindle cannot be shared, loaned, resold, or returned. And Amazon can “revoke” a license for any reason, wiping the book from your Kindle without prior notification or consent. The thought of buying anything to put on the Kindle sickens me, because it feels like something straight out of 1984.

At the same time, the allure of instant “buy it now” satisfaction, and the fact that the DRM restrictions have already been broken does provide some comfort. Not a lot, but I could see myself in a moment of weakness, or just prior to a long trip, breaking down and clicking the buy button. And then promptly removing the DRM, of course.

I’ve loaded my Kindle with a dozen free books. Most of them are old and out of copyright, provided courtesy of Project Gutenberg, a service that digitizes old books. One book my mom purchased, and one was a promotional offer. The purchased books, in general, are formatted a bit better, but on the whole the experience is pretty disappointing. Everything is displayed in one font. The kerning is fine, but I wish I could adjust the line spacing. Instead of pages, Amazon uses some sort of strange sectioning system, so that if the font is scaled larger or smaller, your place stays the same. The device has no backlight, so you need a book light (how old-fashioned!) to read it in the dark. And as I’ve already said, the device is very difficult to hold comfortably, even when mounted in its provided leather cover, although the Kindle 2 is a lot better in this regard.

I’ve been throwing the Kindle in my bag and taking it everywhere I go. I keep finding myself reading. At work, during lunchtime. On the T. Around the house. When I’m waiting for something or someone. It is nice to always have some books present, and to be able to effortlessly and instantly switch between them. It is nice that my “book” is of a standard size, no matter the length of the text.

I bought an iPhone application a while ago called “Classics” that contains nicely formatted texts of several classic books that are in the public domain. Despite the iPhone’s small screen, the reading experience is not unduly painful, and I’ve used my phone to read Gulliver’s Travels and The Jungle Book. On the Kindle I’m currently reading The Island of Doctor Moreau. There is a lot of good, free stuff out there. I could keep doing this for a while. And for classics that Amazon has formatted and added to their Kindle store (purchasable for $0.00), the place that I stop reading on the Kindle will actually synchronize automatically with the Kindle application for the iPhone. Sadly, that doesn’t work for books I load onto the Kindle through means other than Amazon.

The end result is that I’m really enjoying my little electronic reading device, despite all its flaws. I wouldn’t pay $300 for it, but I didn’t have to, because my mom never used it and was persuaded to give it to me. But what I have decided to do, in my typical silliness, is eBay it and apply the profits to a Kindle 2. Sorry, Mom. The Kindle 2 doesn’t solve all of my complaints, especially the most important one, about the DRM, but it does make the experience somewhat more pleasant, and I think the $100 or so upgrade price is worth it.

I can’t recommend that anyone go out and buy a Kindle. I can’t get behind what Amazon is doing with their Kindle store and their draconian restrictions, although I can hope that things will improve with time. What I can say is that I think technological progress, the marketplace, and consumer opinion have finally converged to the point where this sort of device is feasible, practical, and in some cases desirable. So we’ve made some progress.

Geeking Out

Using Capistrano to deploy Django web apps

These last few weeks I’ve been working on an outside project that is written in Django (thanks to the involvement of one of the two coders behind Polihood). When it came time to deploy this app to our dev server, we started looking at the Capistrano deployment tool. Unfortunately the documentation for Capistrano is lacking, but the tool itself is darn slick, so I gave it a go.

Capistrano was built as a deployment manager for Ruby on Rails applications, but it has been expanded with additional functionality, and seems to be slowly moving towards being a general-purpose tool. I’ve seen other tutorials written about using Capistrano to deploy web apps that aren’t Rails, but generally they consist of sticking a bunch of shell commands into a Capfile and letting it run, which doesn’t really seem to be the “Capistrano way.”

What I’ve done is use Capistrano’s built-in Rails deploy functionality, writing overrides as I find that I need them. Right now the script only does a basic deploy or rollback, but eventually I’ll probably extend it to do other things as well.

Remarkably, very little needs to change in the standard deploy library to work with Django.
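A minimal, Django-flavored Capfile along these lines might look like the sketch below. Every value here — application name, repository, servers, paths, and the restart mechanism — is a placeholder illustrating the approach, not a drop-in configuration:

```ruby
# Sketch of a Django-flavored Capfile (Capistrano 2-era syntax).
# All names, hosts, and paths below are illustrative placeholders.
set :application, "myapp"
set :repository,  "ssh://git@example.com/myapp.git"
set :deploy_to,   "/var/www/#{application}"
set :use_sudo,    false

role :web, "dev.example.com"
role :app, "dev.example.com"

namespace :deploy do
  # Django has no Rails-style migration hook here; no-op the Rails task
  task :migrate do; end

  # Link in host-specific settings kept out of version control
  task :finalize_update do
    run "ln -nfs #{shared_path}/local_settings.py #{release_path}/local_settings.py"
  end

  # Restart however your Django app is actually served
  task :restart do
    run "touch #{current_path}/django.wsgi"
  end
end
```

The nice part is that the stock recipe’s directory layout (releases/, shared/, and the current symlink) carries over unchanged; only the Rails-specific hooks need to be no-op’d or replaced.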

Geeking Out

Using PHP in Harvard FAS’s environment

FAS IT’s web site helpfully points out that their servers support PHP “in cgi-mode,” but does not explain what that means. What follows are three fairly straightforward steps for setting up PHP in your FAS web space. This assumes some basic UNIX knowledge. If you lack it, find a friend to help you.

The method described here may not be officially supported (or appreciated) by FAS IT. Since they don’t provide documentation, who knows. Exercise due caution.
  1. SSH into fas.harvard.edu and create a public_html directory if you don’t already have one. Make sure it has permissions 755. Also make sure your home directory is world executable.
  2. Copy the PHP binary into your public_html directory (note: this file is nearly 5MB in size and will count towards your quota). Since it is stored on the web servers but not the shell servers, you need to do this by creating a CGI script and visiting it in your web browser. This one does the trick:
    #!/bin/sh
    cp `which php` php.cgi && chmod 755 php.cgi

    After using it once, delete it. You should now have a php.cgi executable in your public_html directory.

  3. Create an .htaccess file to tell the web server how to serve your PHP files. It should look like this:
    Options +ExecCGI
    AddHandler application/x-httpd-php .php
    Action application/x-httpd-php /~youruser/php.cgi

Now create PHP files as you normally would and they should work fine without any special permissions or modifications. Tada!
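A quick way to sanity-check the handler is to drop a trivial PHP file into place from the shell (the filename here is arbitrary; delete the file once you’ve confirmed things work):

```shell
# Create a throwaway PHP page to confirm the CGI handler is wired up.
mkdir -p ~/public_html
cat > ~/public_html/test.php <<'EOF'
<?php echo "PHP works: " . phpversion(); ?>
EOF
chmod 644 ~/public_html/test.php
```

Visiting /~youruser/test.php in a browser should then print the PHP version rather than the raw source.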

Geeking Out

Harvard deals with GSAS hack fall-out

They’re gonna announce the details at CoB. I still think it’s a simple Joomla vuln. See previous post. Also, a decent bit of coverage from some no-name web site and the coverage in the Harvard Crimson. For the last couple of weeks people have been working overtime doing Nessus scans and the like. Here’s what I got this morning:

Subject: Important Notice — SECURITY ALERT

*** Important Notice — Heightened Security Alert for Harvard Managers
***

We expect the GSAS to announce details later today on the hacking
incident involving one of their web servers. This announcement will
likely attract attention both within Harvard and beyond. We are
concerned that hacking attempts may increase following this kind of
publicity and therefore write to suggest that you all be on a heightened
alert status over the next week.

This incident will also likely raise many questions about security
practices and solutions so one should anticipate a spike in inquiries.

Please let me know if you have questions.

Berkman used to have some fairly decent security monitoring, but in the last couple years it’s been loosened a bit for flexibility — keeping those things running reliably and with an acceptable level of false positives in a constantly changing environment is difficult. Which just shows you, in any organization with many competing priorities and limited resources, convenience will win out over security the second you turn your back. The best security strategy is one with many levels of protection. Harvard UIS does some sophisticated border analysis, and organizations like FAS are waking up to the need for additional proactive intrusion *testing* in addition to monitoring. With all of these layers, the success of any individual attack is dramatically lessened, but never eliminated, especially in a large, disparate, and sprawling organization like Harvard.

Geeking Out

Super fun with InterContinental’s Superclick

I’m staying at the new InterContinental San Francisco and partaking of their pay internet services. In the process of using said services, I’ve discovered that they are intercepting and recording every web page I visit. Does that strike anyone else as odd?

I first noticed that between each page view I would get a little white flash. Then when I went to the New York Times, I discovered that I couldn’t click through to the second page of articles — I would just get redirected back to the first page.

A bit of investigation revealed that Intercontinental Hotel Group is using something called Superclick (and if you don’t believe me, just look at the first testimonial on their homepage). Every time you put in a URL request, the Superclick service uses a transparent proxy to grab the request and redirect it to their own page. From there they do some sort of checking and then perform an HTTP redirect to the actual destination.


Here is a session:

[zeno@viper ~]$ curl -v http://www.google.com/
* About to connect() to www.google.com port 80 (#0)
*   Trying 209.85.173.104... connected
* Connected to www.google.com (209.85.173.104) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.16.3 (powerpc-apple-darwin9.0) libcurl/7.16.3 OpenSSL/0.9.7l zlib/1.2.3
> Host: www.google.com
> Accept: */*
> 
< HTTP/1.0 302 Moved Temporarily
< Server: squid/2.5.STABLE14
< Date: Sat, 01 Mar 2008 07:46:02 GMT
< Content-Length: 0
< Location: http://12.35.79.2/superclick/popup.php?popup=6&url=http%3A%2F%2Fwww.google.com%2F
< 
* Closing connection #0

So on every single request they are intercepting the destination, redirecting it, doing something, and then sending you on your way. And of course this breaks some things that use GET and POST variables, although I haven't tested it extensively (and don't really want to). Instead, I set up an SSH tunnel to a tinyproxy server, and told my OS to forward web requests through there. It's working a lot better and I feel a lot safer.
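For the curious, the tunnel comes down to a couple of commands. The hostname, user, and ports below are placeholders, not my actual server:

```shell
# Forward a local port to a tinyproxy instance on a trusted remote box.
# (Hostname, user, and ports are placeholders.)
ssh -f -N -L 8888:localhost:8888 user@trusted-host.example.com

# Point command-line tools at the tunnel; GUI apps need the same
# proxy configured in the OS's network preferences instead.
export http_proxy=http://localhost:8888/
```

Once that’s in place, everything rides encrypted past the hotel network to the trusted box, and Superclick never sees a request to intercept.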


Speaking of safety, I can confirm that this 2005 XSS vulnerability is still present in the Superclick code.

Well, at least their speeds are pretty good. I'm getting 700KB/s on an scp download from Harvard.

Update: In the morning Superclick made me re-login, giving me the opportunity to read their ToS and AUP. Nothing very controversial in there. Using a proxy doesn't violate the terms, and the only vaguely odd clauses are the one that says you can't "attach[] an excessively long signature to your [email] message," which is just strange, and the one that disallows "forging the headers of your email message in any way," which is over-broad.

Geeking Out

Software Notes (Mac Edition)

Soon after Alcor released the source code, Ankur Kothari cleaned up Quicksilver, improved the memory footprint, and fixed a bunch of problems and bugs. Download it here.

The 1.4 beta of iNdependence was recently released. It allows for (nearly) one-click jailbreaking and unlocking of iPhones running 1.1.3 software. I tried it this weekend and it worked great and was pretty easy.

Things is a neat and full-featured todo manager app. It’s sorta pricey, though, and doesn’t talk to iCal, plus I was having some trouble getting my head around how I’m supposed to use it to be more efficient. Now I’m giving Anxiety a shot. It is cute, simple, free, and integrates with iCal and Apple Mail using Leopard’s built-in todo support.

Geeking Out

My new toy (2)

Too bad the inaugural virtual machine is going to be running SCO. Oh well. It’s great that these days you can buy one moderately priced, very powerful machine and use it to run all the services you need for a medium-sized business in a secure and stable way, and then later expand to add capabilities like high availability. This isn’t news to people up to speed on virtualization technologies, but I’m still easing into the awesomeness.

At Berkman we’ve been using VMWare ESX for some time for Windows things, but are now deploying Xen for our production Linux servers. If Maintex didn’t require a legacy SCO system, we could have saved a few thousand bucks, but oh well. VMWare is pretty darn slick regardless.

Geeking Out

How do you preserve video files?

For the time being I’m talking about movies, mostly on DVD, although at some point I need to digitize old Bar Mitzvah and wedding videos and school plays. But right now my problem is trying to turn my DVD collection into an on-demand video library, and I can’t figure out a good way to do it. H.264 is the best video quality currently available, but the amount of time it takes to compress is insane. The baseline profile works with my Apple TV, but the high profile would be a better bet for the future. If I rip in full quality, my iPhone can’t play it back, because it is limited to 640×480 and can’t handle anamorphic video. And what should be done about the DVD extras? Why can’t current software handle supplemental audio tracks? And of course Apple TVs and iPhones can’t play them back even if they could be included.

The only solution I can come up with is ripping full DVDs and figuring out how to compress (or re-compress) later. But they take up a huge amount of space and *still* can’t be played back on any of my devices. Stupid.
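As a rough sketch of what that deferred "compress later" step might look like, here is the kind of ffmpeg invocation I have in mind. Filenames, bitrate, and scaling are all illustrative guesses, not a tested recipe, and getting the unencrypted MPEG-2 off the disc in the first place is a separate problem:

```shell
# Hypothetical sketch: transcode an already-ripped MPEG-2 file into a
# baseline-profile H.264 MP4 sized for an Apple TV or iPhone.
# (Filenames and settings are illustrative placeholders.)
ffmpeg -i movie.vob \
  -c:v libx264 -profile:v baseline -vf "scale=640:-2" \
  -c:a aac -b:a 160k \
  movie.mp4
```

The appeal of this workflow is that the slow, lossy encode can be redone later with better settings, while the full-quality rip remains the archival copy.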

Geeking Out

NetApp, Bacula, NFS

This is a short one. On Friday I wrestled for a while with my new backup solution using Bacula. I’m trying to write the backups to file on a daily basis and then migrate the weeklies to tape. This should make our backups a lot faster and more reliable, with any luck. The file store is Glory, one of our NetApps, and I couldn’t for the life of me figure out why Bacula refused to write files larger than 2GB. This is important, I believe (although I’m not yet entirely sure), because in order to migrate the tapes I need the volumes to be of the same size.

Anyway, as one might guess, the problem was with NFS version 2. Standard NFS v2 doesn’t support file sizes over 2GB, but I thought we were running v3, so it took me an exceptionally long time to discover this. Try mounting with the -o vers=3 option, and if you get a strange failure, it means your NetApp is not set up to support v3. In FilerView, go to NFS configuration and enable v3, but you’ll find the same problem: while the web interface claims the change has been made, in reality NFS is not automatically restarted, and it needs to be. I wasn’t sure how to do this over the web, but SSH came to the rescue, and a simple nfs off and nfs on saved the day.
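Condensed into commands, the fix looks roughly like this. The export path and mount point are placeholders, and the last two commands run in an SSH session on the filer itself:

```shell
# On the Linux client: request NFS v3 explicitly (v2 caps files at 2GB).
# (Export path and mount point are placeholders.)
mount -t nfs -o vers=3 glory:/vol/backups /mnt/backups

# On the NetApp, over SSH, after enabling v3 in FilerView:
# bounce NFS so the new setting actually takes effect.
nfs off
nfs on
```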

Geeking Out

My new toy

I’m getting a bit low on hard drive space and had a very strong desire to replace my large general-purpose Linux server with a smaller dedicated appliance (it was an irrational desire, but these things happen and you just have to go with them). After much agonizing I picked up an Infrant ReadyNAS NV+ and decked it out with four 500GB hard drives and 1GB of RAM. I chose the ReadyNAS over the competition because it got good reviews, is fairly powerful and customizable, lets you provide your own hard drives and RAM, and instead of being a scary black box, is actually a hackable Linux-based device.

So far everything is working great. It took several hours to build the 1.5 TB RAID, after which I set up my shares, my access controls, and my snapshots, and now data is rsyncing over from my old server using the ReadyNAS’s built-in rsync server (nice!). The only problems so far are that it’s a bit noisy compared to my nearly silent tower, and things are moving awfully slowly on the copying… I can get a good 10-12MB/s over gigabit when talking from server to laptop, but ReadyNAS to server is getting closer to 2.5MB/s. Which is pretty much unacceptable, but doesn’t square with published test results, so let’s hope it is just an anomaly.

Geeking Out

Built-in speakers stopped working on a MacBook or MacBook Pro

Today, utterly randomly and with no provocation, the built-in speaker output disappeared from System Preferences on my MacBook Pro and the speakers would no longer work, except for the boot-up chime. All I could get was digital audio out and the red light coming out of the speaker port. Searching online turned up nothing until the right combination of words led me to this Apple support message board thread. While Apple does not have a solution for this problem and resorts to replacing the board containing the audio port, you can solve it yourself by sticking a toothpick or a paperclip into the jack (preferably while the red digital audio light is on): at around the 5 o’clock position there is some sort of switch or rocker that needs to be gently pushed. There will be a bit of a click, and after a few seconds your audio will start working again.

Geeking Out

RAID and Filers 101

I’m probably displaying my ignorance here, but so be it. A standard 3U NetApp disk shelf contains 15 sleds. NetApp’s recommended size for an aggregate in normal (basic, non-fancy) circumstances is 14 disks. That means 12 data disks, 2 for parity, and one hot spare in the shelf. To me this feels both rock-solid reliable and fairly inefficient for any scenario with a large number of writes. It means every time you have a write you need to read from *12 disks twice* in order to get your parity data. Wikipedia suggests that something called WAFL deals with this problem by “coalescing writes in fewer stripes.” I don’t really know what that means and would love to be edumacated…

Geeking Out

Virtual thinking

I’ve been reading a bit over the last couple of days about virtualization solutions for servers, specifically VMWare’s Server product. These sorts of solutions allow you to run multiple “virtual” machines on one actual physical server, which in some cases means you can take better advantage of under-utilized machines while at the same time creating completely separate environments (even running different operating systems) that will not conflict with each other.

What strikes me in reading over the VMWare blogs is that the people who seem to be having the most success with the server, in terms of consolidating from 10 machines to 1, or whatever, are those who have large Windows Server installations. They have taken to heart the idea of “one application per server,” which is pretty much what it sounds like — each major application, be it your accounting system, your virus scanner, or your mail server, sits on its own server, completely separate from everything else. This helps keep things secure and stable, but the trade-off is that you have to buy a *lot* of servers, many of which are generally underutilized. Replacing them all with one powerful box and VMWare makes a lot of sense in that case.

In our case, however, we have several machines that *are* well-utilized, and so the prospect of virtualizing, say, a heavily loaded web server or an important database server does not really appeal to me. Thus the inflated performance numbers you might find in the Windows world are somewhat dulled in the Linux world where, I think, it’s a bit more common to have a few or several different services running per machine, in a fairly secure and stable fashion (or, in the case of our web server, running things in “jails” so that they can’t talk to each other, a much more lightweight form of virtualization that is fairly effective in certain instances).

Still, I’m intrigued, and I’d like to investigate this further, but I don’t really have any machines to spare right now for this sort of thing. It doesn’t help that I can’t find much in the way of people’s experiences with VMWare Server in Linux or UNIX environments, not to mention the complete lack of published benchmarks. I’ll keep looking, and if anyone has any suggestions, do pass them along.