Archive for May, 2009


How to use variable variables in PHP

One of the biggest time-savers in PHP is the ability to use variable variables.  While often intimidating for newcomers to PHP, variable variables are extremely powerful once you get the hang of them.

Variable variables are just variables whose names can be programmatically set and accessed.  For example, the code below creates a variable called $hello and outputs the string “world”.  The double dollar sign declares that the value of $a should be used as the name of the newly defined variable.

<?php
$a = 'hello';
$$a = 'world';
echo $hello;
?>
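The curly-brace form ${$a} is equivalent and often reads more clearly, especially once the expression inside is anything more than a bare variable name (it's also the form used in the foreach loop later in this post):

```php
<?php
// The curly-brace form is equivalent to $$a and is less ambiguous,
// especially when the inner expression is more complex than a bare variable.
$a = 'hello';
${$a} = 'world';
echo $hello;   // same variable as ${$a}
?>
```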

When I started with PHP about 10 years ago, everyone was still using global variables.  That meant that anything you passed as a GET variable could be used as a local variable.  It was very convenient, but unfortunately not very secure.  For me, typing $HTTP_GET_VARS[‘count’] just wasn’t as fun as being able to use $count.  I found myself adding long declaration lists to the top of my files that did nothing but convert my GET/POST variables to local variables.  My code started to look like this:

<?php
$salutation = $HTTP_GET_VARS['salutation'];
$fname = $HTTP_GET_VARS['fname'];
$lname = $HTTP_GET_VARS['lname'];
$email = $HTTP_GET_VARS['email'];
...
?>

Do that for a couple dozen variables and you’ll start telling yourself there has to be a better way.  Nowadays you can use $_GET instead of $HTTP_GET_VARS, but the better solution is to use variable variables. Now my code looks more like this:

<?php
// create an array of all the GET/POST variables you want to use
$fields = array('salutation','fname','lname','email','company','job_title','addr1','addr2','city','state',
                'zip','country','phone','work_phone');

// convert each REQUEST variable (GET, POST or COOKIE) to a local variable
foreach($fields as $field)
    ${$field} = sanitize($_REQUEST[$field]);
?>
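The loop above assumes a sanitize() function exists; the post doesn't show one, so here is a minimal sketch of what it might look like, not the author's actual implementation. In practice, escaping should match your output context (HTML here), and database escaping belongs with your queries instead:

```php
<?php
// Minimal sketch of the sanitize() helper assumed above -- an
// illustration, not the author's actual implementation.
function sanitize($value)
{
    $value = trim($value);
    $value = strip_tags($value);                        // drop raw HTML
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}
?>
```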

This has several benefits.  I reduced 14 lines of code down to 3.  I now have one place to sanitize all my external input. And if I ever decide to change a variable name, I have one less place in my code to fix.

The benefit of this technique increases as you use the $fields array throughout your code.  I now utilize the $fields array when saving my form data to the database.  I use it for loading existing user values from the database.  I use it for passing my form fields back to Smarty:

<?php
$form = array();
foreach($fields as $field)
    $form[$field] = $_REQUEST[$field];
$smarty->assign('form',$form);
?>
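The database step mentioned above can be driven by the same array. A sketch, assuming PDO and a hypothetical attendees table (the table name and $db connection are illustrative, not from the original post):

```php
<?php
// Build an INSERT statement from the $fields array. The table name and
// the PDO connection are assumptions for illustration.
function build_insert_sql($table, $fields)
{
    $placeholders = implode(',', array_fill(0, count($fields), '?'));
    return "INSERT INTO $table (" . implode(',', $fields) . ") VALUES ($placeholders)";
}

// usage, given the local variables created by the earlier foreach:
// $stmt = $db->prepare(build_insert_sql('attendees', $fields));
// $values = array();
// foreach ($fields as $field)
//     $values[] = ${$field};
// $stmt->execute($values);
?>
```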

Variable variables have become one of my favorite features of PHP. They’ve allowed me to tighten up a lot of my code and made it a lot more maintainable.

Have you done anything cool with variable variables?  What other PHP tricks have revolutionized the way you write code?


The protocols powering the real-time web

In the past few weeks there has been a lot of discussion around the rise of the real-time web, including posts from TechCrunch, GigaOm, ReadWriteWeb and Scoble.   A lot of the talk has been around Twitter, Facebook, Friendfeed, OneRiot and of course Google.  You don’t have to be a genius to figure out that real-time is the future of the web.  I believe there is a huge need for the tech community to develop new protocols that will power this fundamental shift in how web apps work.

The problem is our existing protocols are request driven instead of event driven.  The web we know and love wasn’t built with real-time in mind.

Tim O’Reilly sent a tweet from OSCON08 that really captures the essence of the polling problem:

On monday friendfeed polled flickr nearly 3 million times for 45000 users, only 6K of whom were logged in. Architectural mismatch. #oscon08

At EventVue we have a dedicated server that does little more than poll for new blog posts from attendees.  We have a few tricks to reduce the pain, but we’re still polling thousands of blogs every hour even though 99% of them haven’t added any fresh content since the last time we checked.  With blog posts, people are used to having a small delay before they show up in Google Reader or other services.  We’re not so forgiving when it takes 30 minutes for a tweet to show up in a client application, even though getting real-time data from twitter using polling is virtually impossible.
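One standard trick for making each poll cheaper (an illustration of the kind of technique involved, not necessarily what EventVue actually does) is a conditional GET: send an If-Modified-Since header, and an unchanged feed costs only a 304 response with no body:

```php
<?php
// Conditional GET: poll a feed, but let the server answer 304 Not Modified
// when nothing has changed since our last check.
function if_modified_since_header($timestamp)
{
    return 'If-Modified-Since: ' . gmdate('D, d M Y H:i:s', $timestamp) . ' GMT';
}

function poll_feed($url, $last_checked)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(if_modified_since_header($last_checked)));
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $code == 304 ? null : $body;   // null means "no new content"
}
?>
```

Even with this, every unchanged feed still costs a round-trip, which is exactly why push protocols are the real fix.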

So what is the solution?

Some people have said that XMPP holds the answer, but how many developers do you know who have set up an XMPP server before?  Right.  Me too.  XMPP may be a viable transport method but I think we’d be better off using something that is simpler and more familiar to developers.

Another prominent response to the polling problem is the Simple Update Protocol (SUP) that was proposed by Paul Buchheit from Friendfeed.  SUP is certainly an improvement over our current protocols, but what frustrates me is that it only reduces polling instead of eliminating it altogether.  It may make sense for FriendFeed, but it’s not something I would add to my blog.

My favorite approach is PubSubHubbub that was proposed by Brad Fitzpatrick and Brett Slatkin from Google.  PubSubHubbub might have a horrible name, but the protocol is exactly what we need to fix our polling problems.  It’s lightweight, simple to understand and built on top of basic HTTP.

PubSubHubbub is a simple extension to Atom that uses webhook callbacks to deliver practically instant notifications between servers when a feed is updated.  The protocol is decentralized and free.  Anyone can run a hub.  Anyone can be a publisher or a subscriber.  I like that it eliminates polling altogether and is incredibly simple to implement.  I took a stab at writing the PHP client library and was able to take it from protocol spec to code in less than 2 hours.
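On the publisher side, the protocol boils down to a single HTTP POST to the hub whenever a feed updates: hub.mode=publish plus the URL of the updated topic. A sketch (the hub URL in the usage comment is just an example):

```php
<?php
// Publisher-side ping per the PubSubHubbub spec: POST hub.mode=publish
// and hub.url=<updated topic> to the hub.
function hub_ping_body($topic)
{
    return http_build_query(array('hub.mode' => 'publish', 'hub.url' => $topic));
}

function publish_to_hub($hub, $topic)
{
    $ch = curl_init($hub);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, hub_ping_body($topic));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $code == 204;   // hubs reply 204 No Content on success
}

// e.g. publish_to_hub('http://pubsubhubbub.appspot.com/', 'http://example.com/feed.xml');
?>
```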

If you’re interested, you can check out my PubSubHubbub PHP library and download and install the PubSubHubbub WordPress plugin I wrote as well.

It’s worth mentioning the role that Gnip plays in all of this.  Gnip has been leading the charge against the evils of polling.  I’ve been a big fan of their service and have written before how they helped EventVue.   But at the end of the day, the winning technology shouldn’t be in the hands of one company — it should be open and distributed.   Open protocols don’t eliminate the need for Gnip.  Trusted hubs like Gnip will play an important role in handling the flow of data between publishers and subscribers.  Companies will pay good money to off-load that work, and Gnip is already at the center of that opportunity.  I’d love to see Gnip embrace the open protocols that are being developed and lead the drive for adoption of PubSubHubbub in particular.

I’m excited about PubSubHubbub for a few reasons.  First, it opens the door for a whole new range of real-time applications that simply aren’t possible today.  It’s also a chance for me to contribute to solving a really big problem and an opportunity for me to get in on the ground level of something I believe is going to be huge.  I wasn’t able to contribute to the design of HTTP or sit in on the conversations that led to the development of the RSS protocol.  But one day I’m going to be able to brag that Online Aspect was the very first blog on the web to support PubSubHubbub.   And for a geek like me, that’s pretty cool.


How to speed up your website

There are few things as frustrating as having to wait for a website to load.  Not only do slow websites make for a poor user experience, they can also have a big impact on your bottom line:

  • Google discovered that adding 500ms to their load time resulted in a 20% loss in page views.
  • Amazon discovered that every 100ms they added resulted in a 1% loss of conversions.

Steve Souders is the main thought leader on how to make websites blazingly fast.  Steve works at Google, but before that he worked at Yahoo on an extremely useful project called YSlow.   I have used a lot of his research to speed up my own projects and he’s taught me a lot of simple things that can make a big difference in web performance.

Steve recently taught a class at Stanford on high performance websites and the videos are available online.   The full set costs $600, but you can watch the first 3 for free.  I would recommend that anyone building stuff on the web take the time to watch and learn.

Update 06/07/09: Google recently announced their own version of YSlow called PageSpeed.  It’s got some additional features that YSlow doesn’t have — like giving you optimized images that can be saved directly from the plugin.  Check it out.


Verifying domain name ownership

I got a nice shout-out on TechCrunch today for discovering an issue with the new Kindle Publisher program.  The vulnerability allowed anyone to claim a blog as their own and take advantage of the 30% rev-share that Amazon offers on their $1.99 subscription fee.  Erick Schonfeld did a nice job covering the issue and explaining the implications of the hack.  You can read about it in the TechCrunch article.

The interesting thing about this vulnerability is that there are already accepted methods in place for verifying that someone owns a domain name.  I understand that Amazon may have wanted to remove the friction from getting people started, but this stuff matters too much to get wrong — especially when there is a large audience and money to be gained.

For those who are interested in the best way to verify domain name ownership (ahem, Amazon), Google would be a great role model to follow.  There is a nice explanation of how Google’s domain verification process works on their help pages:

To verify that you own a site, you can either add a meta tag to your home page (proving that you have access to the source files), or upload an HTML file with the name you specify to your server (proving that you have access to the server).

Each verification method has its advantages. Verifying using a meta tag is ideal if you aren’t able to upload a file to your server. If you have direct access to your server, you may find it easier and faster to upload an HTML file.
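The file-upload flavor of that check is straightforward to implement. A sketch, with a hypothetical token scheme (a real system would also want HTTPS, retries, and re-verification over time; this is not Google's actual format):

```php
<?php
// Sketch of file-based domain verification: give each account a
// deterministic, unguessable file name, then fetch it from the claimed
// domain. The token scheme here is an assumption for illustration.
function verification_filename($account_id, $server_secret)
{
    return 'verify-' . sha1($account_id . $server_secret) . '.html';
}

function is_domain_verified($domain, $account_id, $server_secret)
{
    $url = 'http://' . $domain . '/' . verification_filename($account_id, $server_secret);
    $headers = @get_headers($url);
    // a 200 proves the claimant can put files on that server
    return $headers && strpos($headers[0], ' 200 ') !== false;
}
?>
```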

Amazon would do well to follow Google’s lead.
