

List of Front Range Software Networking Events and Conferences

Updated March 21: crossed out ‘conferences’ because I don’t do a good job of listing those.
Boulder, Colorado, has a great tech scene that I’ve been a peripheral member of for a while now.  I thought I’d share a few of the places I go to network.  And by “network”, I mean learn about cool new technologies, get a feel for the state of the scene (are companies hiring?  Firing?  What technologies are in high demand?), and chat with interesting people.  All of the events below focus on software, except where noted.

NB: I have not found work through any of these events.  But if I needed work, these communities are the second place I’d look.  (The first place would be my personal network.)

Boulder Denver New Tech Meetup

  • 5-minute presentations.  Twice a month.  Audience varies wildly, from hard core developers to marketing folks to graphic designers to upper level execs.  Focus is on new technologies and companies.  Arrive early, because once the presentations start, it’s hard to talk to people.
  • Good for: energy, free food, broad overviews, regular meetings, reminding you of the glory days in 1999.
  • Bad for: diving deep into a subject, expanding your technical knowledge

User groups: Boulder Java Users Group, Boulder Linux Users Group, Rocky Mountain Adobe Users Group, Denver/Boulder Drupal Users Group, Denver Java Users Group, and others.  (Updated 11/12 8:51: added Denver JUG.)

  • Typically one or two presentations each meeting, for an hour or two.  Tend to focus on a specific technology, as indicated by the names.  Sometimes food is provided.
  • Good for: diving deep into a technology, networking amongst fellow nerds, regular meetings
  • Bad for: anyone not interested in what they’re presenting that night, non-technical folks

Meetups (of which BDNT, covered above, is one)

  • There’s a meetup for everything under the sun.  Well, almost.  If you’re looking to focus on a particular subject, consider starting one (not free) or joining one (typically free).
  • Good for: breadth of possibility–you want to talk about Google?  How about SecondLife?
  • Bad for: many are kind of small

Startup Drinks

  • Get together in a bar and mingle. Talk about your startup dreams or realities.
  • Good: have a beer, talk tech–what’s not to like?, takes place after working hours, casual
  • Bad: hard to target who to talk to, intermittent, takes place after working hours.

BarCamp

  • Originally started, I believe, in response to FooCamp, this is an unconference.  On Friday, attendees get together and assemble an interim conference schedule.  On Saturday, they give the presentations, each lasting about an hour.  Some slots are group activities (“let’s talk about technology X”) rather than presentations.  Very free form.
  • Good: for meeting people interested in technologies, can be relatively deep introduction to a technology
  • Bad: if you need lots of structure, if you want a goodie bag from a conference, presentations can be uneven in quality, hasn’t been one in a while around here (that I know of)

Ignite

  • Presentations on a variety of topics, some geeky, some not.  Presentations are determined by vote.  Each presentation is 20 slides and 5 minutes total.  Costs something (~$10).
  • Good: happens in several cities (Denver, Boulder, Fort Collins) so gives you chance to meet folks in your community, presentations tend to be funny, wide range of audience
  • Bad: skim surface of topic, presentation quality can vary significantly, not a lot of time to talk to people as you’re mostly watching presentations

CU Computer Science colloquia

  • Run by the CU CS department, these are technical presentations.  Usually given by a visiting PhD.
  • Good: Good to see what is coming down the pike, deep exposure to topics you might never think about (“Effective and Ubiquitous Access for Blind People”, “Optimal-Rate Routing in Adversarial Networks”)
  • Bad: The ones I’ve been to had no professionals there that I could see, happen during the middle of the work day, deep exposure to topics you might not care about

Jelly

  • Cooperative work environments, hosted at a coffee shop or other location.
  • Good: informal, could be plenty of time to talk to peers
  • Bad: not sure I’ve ever heard of one happening on the front range, not that different from going to your local coffee shop

Boulder Open Coffee Club

  • From the website: it exists to “encourage entrepreneurs, developers and investors to organize real-world informal meetups”.  I don’t have enough data to give you good/bad points.

Startup Weekend

  • BarCamp with a focus–build a startup company.  With whoever shows up.
  • Good: focus, interesting people, you know they’re entrepreneurial enough to give up a weekend to attend, broad cross section of skills
  • Bad: you give up a weekend to attend

Refresh Denver

  • Another group that leverages meetup.com, these folks are in Denver.  Focus is on web developers and designers.  Again, I don’t have enough data to give good/bad points.

Except for Ignite, everything above is free or donation-based.  The paid conferences around Colorado that I know about, I’ll cover in a future post.

What am I missing?  I know the list is skewed towards Boulder–I haven’t really been to conferences more than an hour’s drive from Boulder.

Do you use these events as a chance to network?  Catch up with friends?  Learn about new technologies, processes and companies?

Tips: Deploying a web application to the cloud

I am wrapping up helping a client with a build-out of a Drupal site on EC2. The site itself is a pretty standard CMS implementation–custom content types, etc. The site is an extension to an existing brand, and exists to collect email addresses and send out email newsletters. It was a team of three technical people (there were some designers and other folks involved, but I was pretty much insulated from them by my client), and I was lucky enough to do a lot of the infrastructure work, which is where a lot of the challenge, exploration and experimentation was.

The biggest attraction of the cloud was the ability to spin up and spin down extra servers as the expected traffic on the site increased or decreased. We chose Amazon’s EC2 for hosting. They seem a bit like the IBM of the cloud–no one ever got fired for picking them, etc. They have a rich set of offerings and great documentation.

Below are some lessons I learned from this project about EC2. While it was a Drupal project, I believe many of these lessons are applicable to anyone who is building a similar system in the cloud. If you are building a video processing supercomputer, maybe not so much.

Fork your AMI

Amazon EC2 instances are instantiations of an Amazon Machine Image (AMI). Anyone can create a machine image and make it available for others to use. If you start an instance off an image and the owner later deletes the image (or otherwise removes it), your instance continues to run happily, but if you ever need to spin up a second instance off the same AMI, you can’t. In this case, we were leveraging some of the work done by Chapter Three, called Project Mercury. This was an evolving project that released several times while we were developing with it. Each time, there was a bit of suspense to see if what we’d done on top of it worked with the new release.

This was suboptimal, of course, but the solution is easy. Once you find an AMI that works, you can start up an instance, and then create your own AMI from the running instance. Then, you use that AMI as a foundation for all your instances. You can control your upgrade cycle. Unless you are running against a very generic AMI that is unlikely to go away, forking is highly recommended.
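
With current tooling, the “fork” is a single API call against a running (EBS-backed) instance. Here’s a minimal sketch using boto3, a present-day Python client that postdates this project and is shown only for illustration; the instance ID and image name are placeholders, not values from the project.

```python
# Sketch: create ("fork") your own AMI from a running instance using boto3.
# The instance ID and image name below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # the instance built on the upstream AMI
    Name="mysite-base-image-v1",        # your forked image; version it yourself
    Description="Forked from the upstream AMI so upgrades happen on our schedule",
    NoReboot=True,                      # skip the reboot; accept a slightly less consistent snapshot
)
print("New AMI:", response["ImageId"])
```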

Use Capistrano

For remote deployment, I haven’t seen or heard of anything that compares to Capistrano. Even if you do have to learn a new scripting language (Ruby), the power you get from ‘cap’ is fantastic. There’s pretty good EC2 integration, though you’ll want to have the EC2 response XML documentation close by when you’re trying to parse responses. There’s also some hassle involved in getting cap to run on EC2. Mostly it involves making sure the right set of ssh keys is in the correct place. But once you’ve got it up and running, you’ll be happy. Trust me.

There’s also a direct capistrano/EC2 integration project, but I didn’t use that. It might be worth a look too.
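
For the curious, the lookup those cap tasks need (which hosts are running, and what their names are) is the part that involves wading through the response XML. Here’s a rough sketch of the same query done with boto3 instead (not what we used, and the “role” tag filter is a hypothetical convention for marking web nodes):

```python
# Sketch: build a deployment host list from the EC2 API instead of parsing raw XML.
# The "role" tag filter is a hypothetical convention, not something from the project.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:role", "Values": ["web"]},
    ]
)["Reservations"]

hosts = [
    instance["PublicDnsName"]
    for reservation in reservations
    for instance in reservation["Instances"]
]
print("\n".join(hosts))  # feed these to your deploy tool as the web-server role
```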

Use EBS

If you are doing any kind of database driven website, there’s really no substitute for persistent storage. Amazon’s Elastic Block Store (EBS) is relatively cheap. Here’s an article explaining how to set up MySQL on EBS. I do have a friend who is using EC2 in a different, very write-intensive manner, and he is having some performance issues with his database on EBS; but for a write-seldom, read-often website like this one, EBS seems plenty fast.
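
For reference, provisioning and attaching a volume is a couple of API calls. Here’s a boto3 sketch (again a modern client, with placeholder size, zone, instance ID, and device name); you still need to create a filesystem, mount it, and point MySQL’s data directory at the mount.

```python
# Sketch: create an EBS volume and attach it to an instance with boto3.
# Size, availability zone, instance ID, and device name are all placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=50)  # GiB
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # must be in the same availability zone as the volume
    Device="/dev/sdf",                 # then mkfs, mount, and move MySQL's datadir onto it
)
```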

EC2 Persistence

Some of the reasons to use Capistrano are that it forces you to script everything, and makes it easy to keep everything in version control. The primary reason to do that is that EC2 instances aren’t guaranteed to be persistent. While there is an SLA around overall EC2 availability, individual instances don’t have any such assurances. That’s why you should use EBS. But, surprisingly, the EC2 instances that we are using for the website haven’t bounced at all. I’m not sure what I was expecting, but they (between three and eight instances) have been up and running for over 30 days, and we haven’t seen a single failure.

Use ElasticFox

This is a Firefox extension that lets you do every workaday task, and almost every conceivable operation, to your EC2 instances. Don’t delay, use this today.

Consider CloudFront

For distributing images, CloudFront is a natural fit. Each instance can then reference the image by its CloudFront URL, without you needing to sync files across instances. You could use this for other static files as well.
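
On the application side, the mechanics are just URL rewriting: push the file to the distribution’s origin once, then have every instance build the public URL from the distribution’s domain. A tiny illustrative helper, with a made-up distribution domain:

```python
# Sketch: point image references at a CloudFront distribution instead of the local filesystem.
# The distribution domain below is made up; use the one CloudFront assigns you.
CLOUDFRONT_DOMAIN = "d1234example.cloudfront.net"

def cdn_url(path: str) -> str:
    """Map a site-relative media path to its CloudFront URL."""
    return f"https://{CLOUDFRONT_DOMAIN}/{path.lstrip('/')}"

# Every instance renders the same URL, so nothing needs to be synced between servers.
print(cdn_url("/files/images/logo.png"))
# https://d1234example.cloudfront.net/files/images/logo.png
```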

Use Internal Network Addressing where possible

When you start an EC2 instance, Amazon assigns it two addresses: an external DNS name that can be used to reach it from the internet, and an internal one. For most contexts the external name is more useful, but when you are communicating within the cloud (pushing files around, or a database connection), prefer the internal address. There appear to be some performance benefits, and there are definitely pricing benefits. “Always use the internal address when you are communicating between Amazon EC2 instances. This ensures that your network traffic follows the highest bandwidth, lowest cost, and lowest latency path through our network.” We actually used the internal DNS name, but in hindsight the internal IP address makes more sense: the internal DNS is not something you control, so it gives you no abstraction benefit–that took a bit of mental adjustment for me.
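
Both addresses come back from the same describe call, so this is really just a question of which field you read. A small boto3 sketch (placeholder instance ID) of pulling the private address for an in-cloud database connection and the public name for everything else:

```python
# Sketch: read the internal vs. external addresses of an instance from the EC2 API.
# The instance ID and database credentials are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]

public_name = instance["PublicDnsName"]     # use from outside the cloud
private_ip = instance["PrivateIpAddress"]   # use for instance-to-instance traffic

# e.g. a database connection string for another instance in the same region:
db_url = f"mysql://drupal:secret@{private_ip}/drupal"
print(public_name, db_url)
```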

Consider reserved instances

If you are planning to use Amazon for hosting, make sure you explore reserved instance pricing. For an upfront cost, you get significant savings on your runtime costs.
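
The decision comes down to break-even arithmetic: the reservation pays for itself once the hourly savings cover the upfront fee. A sketch with made-up numbers (these are not Amazon’s actual rates; check current pricing):

```python
# Sketch: break-even point for a reserved instance. All prices are made-up placeholders.
on_demand_hourly = 0.10   # $/hour, hypothetical
reserved_hourly = 0.04    # $/hour, hypothetical
upfront_fee = 300.00      # $, hypothetical one-time reservation cost

hourly_savings = on_demand_hourly - reserved_hourly
break_even_hours = upfront_fee / hourly_savings
print(f"Pays for itself after {break_even_hours:.0f} hours "
      f"(~{break_even_hours / 24 / 30:.1f} months of continuous use)")
```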

On Flexibility

You have a lot of flexibility with EC2–AMIs are essentially yours to customize as you want, starting up another node takes about 5 minutes, you control your own DNS, etc. However, there are some things that are set at startup time. Make sure you spend some time thinking about security groups (built-in firewall rules), which fall into this category. Switching between AMIs requires starting up a new instance. Right now we’re using DNS round robin to distribute load across multiple nodes, but we are planning to use elastic IPs, which allow you to remap a routable IP address to a new instance without waiting for DNS timeouts. EBS volumes and the instances they attach to must be in the same availability zone. None of this is groundbreaking news; it’s really just a matter of reading all the documentation, especially the FAQs.
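
To illustrate that last point about elastic IPs, the remap is a single API call. A boto3 sketch with placeholder IDs (this is the newer allocation-ID style of the call, which differs a bit from the original EC2 API):

```python
# Sketch: remap an elastic IP to a new instance, e.g. after replacing a failed node.
# The allocation ID and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.associate_address(
    AllocationId="eipalloc-0abc1234def567890",  # the elastic IP you allocated earlier
    InstanceId="i-0fedcba9876543210",           # the replacement instance
    AllowReassociation=True,                    # move it even if it is attached elsewhere
)
# Traffic to the elastic IP now reaches the new instance, with no DNS timeout to wait out.
```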

Documentation

Be aware that there is a ton of documentation (one set for each API release) for EC2 and the other web services that Amazon provides. Rather than starting with Google, which often leads you to an outdated version of the documentation, you should probably start at the AWS documentation center. This is especially true if you’re working with any of the newer systems, whose APIs may not be as stable.

In the end

Remember that, apart from new tools and a few catches, using EC2 is not that different from using a managed server where you don’t have access to the hardware. The best document I found on deploying Drupal to EC2 doesn’t talk about EC2 at all–it focuses on the architecture of Drupal (Drupal 5 at that) and how best to scale it with additional servers.

[tags]ec2,amazon web services,capistrano rocks[/tags]

A survey of CDNs for use with Drupal

I have spent some time researching Content Delivery Networks (CDNs) and how they can integrate with Drupal.  Note that I have not yet implemented a CDN solution, so my experiences and opinion may change….  I will try to do a second post or update when we’ve actually rolled something out live.

Here are some criteria I’d use in selecting a Drupal module for CDN management:

  • Do you need a CDN?  This is the key question, as a CDN can speed up your site, but it introduces a layer of management and expense that might not be worth the hassle.
  • Do you mind patching Drupal core?  This might be a maintenance issue going forward.
  • Do you want to have just images on your CDN, or JavaScript and CSS as well?  What about video?
  • How contained within the Drupal interface do you need your interactions with a CDN to be?  Are you comfortable using a third party tool sometimes?
  • Do you have an existing CDN to work with, or are you selecting a CDN from scratch?  Obviously, you have more flexibility in the second case.
  • Do you mind coding? Some of these modules seem like they are 75% of the solution, but you might need to write some code to finish things up.

There are a number of modules that attempt to integrate a CDN into Drupal, or might help in doing so.  All of these had a release for Drupal 6.

  • CDN: this seems like a great fit.  Active development, good sized issue queue, support for multiple CDNs.  It also patches core. Here’s a list of CDNs used with this module.
  • media_mover: this module seems like it might be useful if you need to move image and/or video files to a CDN.  That might require some coding, although I remember there being some S3 and FTP support.
  • creeper: this module is all about Amazon API integration, including CloudFront.  Plus, what a great name!
  • parallel: a fairly new module that changes the source hostnames in image, CSS and JavaScript HTML tags.  Therefore, those files can be served off a CDN, or another web server, etc.
  • storage_api: this is a general storage service with a CDN focus, but doesn’t appear to be well documented or supported as of this time.
  • cloudfront: adds Amazon CloudFront support to the imagecache module

These all seem to be useful in their own ways.  The current project I’m working on is already invested in the Amazon infrastructure, mainly because of Project Mercury, so cloudfront is our current choice.

Did I miss any key modules?

[tags]drupal cms, cdns rock[/tags]

The Drupal Experience

AKA, my time with the blue droplet.  I recently built a website for a client.  I initially recommended WordPress, as I often do, but the client suggested the website would grow into an application.  You can certainly do webapps with WordPress, but it seemed worthwhile to look at alternatives.  Drupal 6 seemed to fit the bill:

  • flexible
  • lots of documentation (always important in any evaluation)
  • great community
  • tons of additional functionality (packaged as modules)
  • customizable UI
  • multiple languages supported

So, the site is launching soon and I thought I’d jot down my thoughts.  Keep in mind that I’m very much a Drupal newbie.

The good:

  • I was astonished at the plethora of modules.  In fact, just about everything we needed to do functionally was done; it was a matter of getting the right modules installed and configured.  I don’t think I wrote a single line of code, though I did do plenty of code reading.
  • The look and feel were very customizable.  I ended up using a large number of blocks (isolated chunks of content that you can control and place).  The ‘clean’ theme was pretty easy to customize via CSS.  The Web Developer add-on for Firefox was invaluable for this process–just hitting ‘control-shift-C’ and mousing over a component let me know what selector to use.
  • WYSIWYG support was pretty good, including image uploading (as a separate module), control of which of the TinyMCE buttons to display, and a dropdown to surround text with custom css class names.  I ended up using the HTML Purifier to scrub input from admin users.
  • The webform module allows non-technical users to create complicated forms quite easily.  I didn’t really push this module, but it appears to have support for multi-page wizards and other complex stuff.  But just using it for validation of required fields saved a lot of effort.
  • Drupal 6 has a robust upgrade mechanism.  At least, it sure seems like it–I have upgraded modules and the core functionality a few times and haven’t had any issues.

My issues:

  • We started off on a medium-strength server without much caching, and performance was a big issue.  There are a number of resources out there, and some research is worth your while.  Drupal is a big, complicated system, and like all big, complicated systems, tuning is a continual process.  Luckily, there’s a fair bit of built-in caching that can be enabled with a checkbox.
  • Deploying from one server to another is an issue.  I talked to a friend who does a lot of Drupal development, and he mentioned it is a real thorn in the side of Drupal.  There’s just not that much available to support moving code from staging to production and content the other way.  To be fair, this is an issue for most CMSes, and there are some projects (deploy is one of them) attacking it.  I don’t know of any open source CMS that solves this problem entirely.
  • The SEO components seemed pretty good, but I was surprised that they weren’t bundled with the base Drupal installation.  You have to install a module to enable meta tags.  And another one for page titles.  Page titles!  (I looked at but was quickly overwhelmed by installation profiles, which might obviate some of the module installation.  There didn’t seem to be much support for Drupal 6 profiles.)
  • The complexity of Drupal, which allows it to do so much, means that there’s a lot to learn.  It can be a bit overwhelming at times.

The long and short of it:
If you’re looking to build a web application (not just a site), have some php expertise and some time to get up to speed, and need a lot of functionality, Drupal is worth looking at.

[tags]cms, php cms, drupal, blue droplets keep falling on my head[/tags]