I’ve been loitering in #django on Freenode recently, and misunderstandings about static files have cropped up often enough that I figured putting this together might help.
A typical Django project will have multiple sets of static files. The two common sources are applications with a
static directory for media specific to them, and a similarly-named directory somewhere in the project for media tying the whole project together.
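A minimal sketch of those two layouts (the project and app names here are my own examples, not from any particular codebase) — one `static` directory inside an app, and one at project level:

```shell
# Hypothetical layout: a per-app static/ directory (namespaced under
# the app's name, so files don't collide) plus a project-level one.
mkdir -p mysite/polls/static/polls mysite/static
touch mysite/polls/static/polls/polls.css mysite/static/base.css
# `manage.py collectstatic` would gather files from both locations
# into STATIC_ROOT for serving; here we just list what exists.
find mysite -name '*.css' | sort
```

The per-app directories travel with their apps; the project-level one holds the media tying everything together.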
I’ve been using Heroku a lot recently for deploying code. For those who’ve missed it, it’s a rather nice Platform as a Service (PaaS) offering; think Google App Engine, but actually usable: no long list of forbidden or broken frameworks and libraries, you can use a relational database, and other such niceties.
I’ve had a number of people ask in conversation why I use (or indeed trust) Heroku and don’t just deploy on top of AWS. Then I came across this question asking “Why do people use Heroku when AWS is present?” on Stack Overflow, which I thought did a pretty good job of explicitly covering some of the aspects of the choice.
Maybe I should have flagged the question as Not Constructive (it’s now closed because others have), but I instead found myself using it as a bit of a dumping ground for my thoughts on the matter; if you’re interested, check it out — I hope it’s informative.
It wasn’t very long before I ran into an unfortunate issue. Every time I tried to add or delete a DNS entry, I’d be presented with a shiny happy “Success!” message and sent back to the domain list. I’d then see no evidence of my change.
Sometimes, it emerges, this is just because they lie: “Success” doesn’t mean “Success”, it means “I’m sorry, but we won’t let you point a CNAME to a CNAME” (not a wholly unreasonable position to take, but not required, and seemingly viewed as outdated). Sometimes, they only sort-of-lie: “Success” means “Success, but our UI won’t update until you’ve logged out and back in again”, so the UI displays stale data — less of a lie, but about as irritating, especially as it’s indistinguishable from lying-Success.
Let me tell you a story about a ~~call that changed my destiny~~ tool that I find really useful.
To quote the website, “Augeas is a configuration editing tool. It parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native config files”.
Originally this blog was started with the intention of being semi-anonymous – no great veil of secrecy, but enough to not be obviously trivially linkable with me. Partly because I don’t feel particularly happy with my writing ability and style, and wanted to develop this a little “quietly” as it were, partly because I felt this gave me the most “freedom” in terms of what I chose to talk about and how.
Well, I’ve decided that really, my writing isn’t *that* terrible, and I don’t think there’s anything I wouldn’t want associated with me, nor can I see myself posting such in future, so I’ve stumped up my $12 per annum for WordPress “subdomain mapping” and now I have a primary URL of http://blog.doismellburning.co.uk/. I use the doismellburning moniker for pretty much everything now, so it’s nice to at last bring my blog into the fold, as it were. Plus my friends who do marketing-y type things tell me that one of those “consistent personal brand” thingies is good.
Why do I not host this myself? It’s not like I’m lacking in places to host a WordPress instance. However, I’m less than impressed with WordPress’s security history – see some examples. Also, I just don’t want the effort of maintaining it myself. However I really like it as a platform, so, WordPress.com is ideal. For my use-case, it’s free, and the annual fee to point my own domain at it is still worth it with respect to my time saved.
I do still need to look into a backup solution though – just because I like it, doesn’t mean I trust it with my data!
This is perhaps slightly unfair, but roughly: after following the Example Usage instructions from the Eucalyptus site for a good little while, and re-following, and tweaking, and checking, I had no joy.
I then googled a bit harder and found HybridFox, which worked first time.
Perhaps I could have gotten ElasticFox working. HybridFox meant I didn’t have to try.
My first encounter with Ubuntu Enterprise Cloud was while throwing together a quick dev server at work. I booted from an Ubuntu server ISO and saw the words “Enterprise” and “Cloud” together; I promptly dismissed it as some form of bloaty-buzzwordy-junk. It turns out the word “Ubuntu” trumps the “Enterprise Cloud” bit, and it’s quite awesome.
When I want to make someone’s eyes glaze over, I describe it as an Open Source Software Infrastructure-as-a-Service Solution, or OSSIaaSS. When I actually want to talk sensibly about it, I describe it as “Amazon Web Services for your own hardware”. At its core, it’s a software platform called Eucalyptus which gives you your own private IaaS setup. Crucially, it exposes this via the AWS API, so there’s a wealth of existing tools out there, and it makes later migration to real AWS easier.
(I should clarify: I’m doing AWS a huge disservice here; really “all” I’m talking about is EC2 and S3, but they’re my favourite and most-used subsystems, and I’d guess they’re the two things people think of first when someone mentions AWS.)
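Because Eucalyptus speaks the EC2 API, the standard client tools can be pointed at a private cloud just by overriding the endpoint in the environment — a sketch, with a made-up hostname and placeholder credentials:

```shell
# euca2ools / the EC2 API tools read their endpoint and credentials
# from the environment; point them at the private cloud instead of
# Amazon. Hostname and keys here are placeholders, not real values.
export EC2_URL="http://eucalyptus.internal.example:8773/services/Eucalyptus"
export EC2_ACCESS_KEY="your-access-key"
export EC2_SECRET_KEY="your-secret-key"
euca-describe-availability-zones
```

The same environment variables then work unchanged if you later migrate to real AWS.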
Generally it seemed remarkably easy to set up. I did manage to make a few silly mistakes along the way that, if I’m honest, took an embarrassing amount of time to identify, despite being relatively obvious things:
Don’t forget to add a Node Controller
Node controllers run your VM instances, so you’ll want at least one. Obvious, right? Well, I managed to forget (multitasking at the time) and spent a little too long wondering why I couldn’t start any VMs.
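Once you do remember, registration itself is a one-liner on the front end (the IP below is the node from my own setup — substitute your own):

```shell
# Register a machine running the Node Controller component with the
# Cluster Controller; this copies keys across and adds it to the pool.
sudo euca_conf --register-nodes "10.250.59.29"
```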
$ sudo euca_conf --list-nodes
registered nodes:
  10.250.59.29  llama  i-2B980609 i-38940671 i-3B250746 i-3D2B07A3 i-44F408EA i-4A7B08DD i-555C09E0
  10.250.59.30  llama
$ euca-describe-availability-zones verbose
AVAILABILITYZONE  llama          10.250.59.211
AVAILABILITYZONE  |- vm types    free / max   cpu   ram   disk
AVAILABILITYZONE  |- m1.small    0025 / 0032    1   192      2
AVAILABILITYZONE  |- c1.medium   0025 / 0032    1   256      5
AVAILABILITYZONE  |- m1.large    0012 / 0016    2   512     10
AVAILABILITYZONE  |- m1.xlarge   0012 / 0016    2  1024     20
AVAILABILITYZONE  |- c1.xlarge   0006 / 0008    4  2048     20
Don’t forget to enable virtualisation in the BIOS
Once I’d added my Node Controllers, I went to start some instances, only to watch them sit in “pending” mode before terminating. Once I found my way to the Node Controller logs, I was presented with:
libvirt: internal error no supported architecture for os type 'hvm' (code=1)
At this point, kvm-ok (for Ubuntu Lucid, in the qemu-kvm package) is your friend. The machines I was using as Node Controllers (Dell R710s) all have a “Virtualization Technology” setting in the BIOS (under “Processor Settings”). On all of our machines (and I gather this is standard) this was set to Disabled. Rebooting and editing the BIOS to enable it was all that was needed:
$ kvm-ok
INFO: Your CPU supports KVM extensions
INFO: /dev/kvm exists
KVM acceleration can be used
As an aside, I fully support this “bug” entry about disassociating kvm-ok from kvm and putting a “You have virtualisation support but it is disabled” pseudo-warning into the motd – bring on the next Ubuntu LTS release!
Potentially important “footnote”
One bit of weirdness I did find immediately after installing (once I’d remembered to create a Node Controller): despite the Node Controller existing, and claiming to have detected the Cluster Controller at install time, the Cluster Controller couldn’t find it. Prodding euca_conf gave nothing in --list-nodes, and --discover-nodes found nothing. However, I was sort of able to cajole things manually with --register-nodes; at least, keys were copied to the Node Controller, but there wasn’t a lot of success beyond that.
I then discovered this thread on the Eucalyptus forum of a user having an essentially identical issue – no NC discovery, manual NC registration appearing to work but not, et cetera, with a follow-up post that a solution reported in another (slightly longer-winded) thread had fixed things.
To repeat for posterity and archiving / Google reasons, the solution, with my own notes, was:
- Deregister all Node Controllers – euca_conf --deregister-nodes
- Deregister the Cluster – I used the WebUI for this; I don’t believe that using the CLI is necessary
- Restart the Cluster Controller – I’m afraid I forget whether or not I did this
- Register the cloud again – at this point, using the CLI is important
- Discover the Node Controllers – euca_conf --discover-nodes
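Sketched as the commands I believe I ran (node IPs, cluster name and address are from my own setup, and I’m hedging on the exact registration syntax for this era of Eucalyptus):

```shell
# 1. Deregister all Node Controllers (space-separated list of node IPs)
sudo euca_conf --deregister-nodes "10.250.59.29 10.250.59.30"
# 2. Deregister the Cluster -- I did this step via the WebUI, not the CLI
# 3. Restart the Cluster Controller (I forget whether this was needed)
sudo service eucalyptus-cc restart
# 4. Register the cloud/cluster again -- via the CLI, which is the
#    important part; name and address are from my setup
sudo euca_conf --register-cluster llama 10.250.59.211
# 5. Rediscover the Node Controllers
sudo euca_conf --discover-nodes
```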
For whatever reason, the Cluster created at install time, and subsequent ones via the WebUI / GUI, had some sort of issue. I haven’t yet been able to diagnose much, nor find a canonical bug report, but this seems a potentially rather significant issue that may hamper people!
The above issue aside, my current experiences with it have been great. Now to get boto talking to it!
So tomorrow is Good Friday, the start of a four-day weekend. Today I received my CloudFoundry account details. I think we know how this is going to pan out – now to pick a toy project to try it out with!
Any suggestions welcome…
Things recently had been a bit quiet on the development front, and when the opportunity arose to get involved in some more “operational” things, I jumped at it. I was faced with a bunch of machines spanning production, development slush boxes, and office servers, and a general desire to clean them up, consolidate a variety of services, and just generally apply some consistency.
The machines in question had had a varied and chequered history – most were hooked up to authenticate via LDAP but not all of them; some ran SUSE, most ran some version of Ubuntu; various nominally identical/similar/clustered machines had a whole range of differently configured sudoers and packages, et cetera.
I’d already heard a lot about Puppet, “an automated administrative engine for your *nix systems, performs administrative tasks [...] based on a centralized specification”, and it sounded rather good. I took a brief look at Chef and Cfengine, which seemed to be the main competitors. Chef was discounted because Puppet seemed to have much more in the way of install-base, community and documentation (I also preferred the idea of a small DSL for configuration rather than “write some Ruby”); Cfengine seemed much lower-level and more in the “I have some scripts, push them out” sense – Puppet’s ability to succinctly express “ensure the package of this name is present” seemed far superior.
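As a flavour of that succinctness, a one-off `puppet apply` can express the “ensure this package is present” idea directly (the package name is just an example):

```shell
# Declaratively state that a package should be installed; Puppet works
# out whether anything actually needs doing, and via which package
# manager, rather than you scripting the steps yourself.
puppet apply -e 'package { "munin": ensure => installed }'
```

Running it twice is harmless — the second run finds nothing to change, which is the whole point of the declarative approach.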
So far, my experiences have generally been excellent. Two things I’ve learnt so far:
First, modules are “just” building blocks. If it feels organisation-specific, it’s a service. This is documented in Puppet Best Practices but not something I fully appreciated until after I’d played around a bit more. Still, it all needed a refactor anyway!
Second, Puppet Forge and puppet-module seem truly excellent resources for grabbing other people’s modules to save yourself the leg-work. My initial foray into Puppet involved writing a basic module or two myself, to improve familiarity with the DSL and concepts, but seriously: unsurprisingly, you’re not going to be the only Puppet user who has found themselves wanting to add an apt repository and keys, or configure munin, et cetera.
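For instance, rather than hand-rolling apt handling, you can search the Forge and pull in an existing module (a sketch — the module name here is illustrative):

```shell
# Search the Puppet Forge for modules matching a keyword, then
# install one into your modulepath instead of writing it yourself.
puppet-module search apt
puppet-module install puppetlabs/apt
```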
Ultimately, Puppet has been an invaluable tool so far in my current mission to bring some more sanity, order and consistency to these configurations, and I heartily recommend it.
I significantly overestimated the amount of effort involved, and was genuinely impressed. I admittedly haven’t played with any of the more advanced bits, but basic installation is so incredibly easy and featureful that I’m more than satisfied for now.
First step, pull it into my project. I’m using Apache Maven, so this was just a matter of scrolling to the Dependencies section of the documentation and copying in the requisite chunk of XML to add three dependencies and a repository to source them from.
Second, hook it up. In my case, this just involved scrolling back up to the web.xml section, and copying in another small chunk of XML to add a filter, filter-mapping and listener.
A whole 5 minutes later, and it was just like the screenshots! Charts of various attributes, request statistics, system information, thread monitoring and more. The app I’m currently working on is pretty light-weight, so I don’t have any database connections, batch jobs or other advanced fun to monitor, but if it’s anything like my experiences so far, it’ll be a breeze to sort.
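Once the filter is hooked up, the reports live under the webapp’s /monitoring path, so a quick smoke test is just a request to it (the context path and port here are from my local setup, not anything canonical):

```shell
# JavaMelody serves its charts and statistics from the monitoring URL
# inside the instrumented webapp; adjust host, port and context to taste.
curl http://localhost:8080/myapp/monitoring
```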
At some point I’ll make time to try out the beta “Deployment on Tomcat without modification of monitored webapps” so I can get monitoring across a range of apps without them needing to care or know. For now, I’m definitely genuinely very impressed!