Office 2016 for Mac

First of all, this is in no way a product review or anything that should resemble real insight into Office 2016; as the sidebar suggests, these are just my inane ramblings on the subject, be they insightful or utter crap.

For the longest time it seems that the Office team and the Mac team at Microsoft never really communicated, except for throwing the odd document over regarding docx compatibility or some quirky VBA feature added to soothe a crazy corporate customer. The Mac version of Office has always been the stepdaughter that the family is pleasant about but doesn’t really want to talk about; from the Mac user’s perspective they’ve always come out with a workable product, but it had its odd quirks and never really fitted in with the Mac OS UI/UX style guidelines.

It seems this has changed. As many people have commented in various blog posts over the last year or two, Microsoft is changing; faced with several big companies slowly eating its lunch while it isn’t looking, the new batch of products being thrown out is aimed to impress, and Office 2016 for Mac is one of them.

I mean, look at it.

So OK, it’s not all about the GUI fluff; what really counts is the UX and how everything works together. I feel confident in saying you could put a heavy Office for Windows user in front of this version and they could use it without any major issues. What bugged me in the past was that Office for Mac had its own style and layout which people had to get used to; it seems the Office and Mac teams are actually working together now, and the result is a great product.

I’m sure that after a few weeks’ usage I’ll be back at my usual situation of wanting to pull my hair out trying to get something done, but for the moment I’m basking in the UI goodness…

DVB-T and SDR with the RTL2832U

I spotted a YouTube video the other day that talked quickly about SDRs (Software Defined Radios) and how you can pick one up for $20, a massive difference from the £300-400 devices I spotted a few years ago. Of course, I decided there and then that I’d grab one to experiment with and searched Google for the mystical device. As it turns out, it’s the Realtek RTL2832U based devices that allow SDR-type functionality, and while a lot of devices out there cost more than the magical $20 due to being advertised as SDRs, it was quite easy to find one of these generic DVB-T tuners with the right chipset on Amazon for a grand total of £9. With the order being eligible under Amazon Prime, I ordered the item yesterday (Saturday) and it was delivered today (Sunday).

So, straight out of the box and into my Debian Jessie test system, everything worked; no tweaking or hassle, within seconds I had a working DVB adapter, and I used the standard DVB tools to scan and create a channels.conf within a minute or so. My last experience with the Linux DVB stack was around 2005-2006 with MythTV: the drivers “sort of” worked and everything was a little rough round the edges. It seems the last 10 years have really cleaned up the stack. With that in mind, it’s not really worth posting about getting the DVB-T tuner to work, because it just did…

SDR required a little extra work. I’ve not spent a large amount of time trying to get the full toolset to work on Debian; the rtl-sdr toolset is available as a package in Jessie and can be easily installed, and the biggest problem was that because I was using my test system I didn’t have an X session running to run anything on. I got everything installed and spun up an rtl_tcp instance without much incident. The biggest roadblock was that you can’t have the DVB kernel module loaded at the same time as using the rtl-sdr tools, but a quick rmmod and blacklist sorted that out; the tools are very quick to point out exactly what needs to be done.
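For reference, the rmmod-and-blacklist step looks roughly like this; it’s a sketch, assuming your stick is claimed by the usual dvb_usb_rtl28xxu kernel module (check `lsmod` first):

```
sudo apt-get install rtl-sdr            # the Jessie package
sudo rmmod dvb_usb_rtl28xxu             # free the tuner for SDR use right now
echo 'blacklist dvb_usb_rtl28xxu' | \
  sudo tee /etc/modprobe.d/rtlsdr-blacklist.conf   # keep it free across reboots
rtl_test -t                             # sanity-check the stick is usable
rtl_tcp -a 0.0.0.0                      # serve I/Q samples to remote clients
```

Remove the blacklist file again when you want the stick back as a DVB-T tuner.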

Instead of carrying on with Linux, I got everything up and running on my MacBook Pro running OS X Yosemite; while OS X doesn’t have the full suite of tools available, a few good ones have been developed for the platform. I found that CubicSDR was by far the easiest to get rolling with: no messing with MacPorts or any other third-party packaging tools, just a DMG and a pre-packaged application. While it isn’t as feature-complete as some of the other packages out there, it does cover the basics for poking around. Their todo list does look interesting, especially the target of having digital demodulation built in.

Quick overview done, and I’m now looking for a better antenna. While not being used as an SDR, the stick itself will happily serve as a DVB-T source for my Plex system using TVHeadEnd, and with a quick MCX-to-coax adapter you can have it plumbed into the household aerial without much issue.

[Update - 2015/08/31]

Regarding the tuners linked earlier in this post, it turns out that the two are actually different: the Nooelec version includes an improved tuner chip (the R820T2). As it turns out, it is worth investing the few pounds more for that version, as it’s more sensitive and also includes a more stable crystal that won’t require much adjustment.

Heroku and NextAction

A while ago it was announced that Heroku would be changing its pricing structure; after a few minutes with a calculator I worked out that it’ll essentially be better for my bigger apps, and not so much for the small stuff I run. At the time I totally forgot about the NextAction app I’ve been running for a very long time to manage my Todoist instance. For what the application does and how it runs, it’s really not worth the $7 to host on a full Heroku instance…

So with this in mind I put a few hours into the tool today; it now has a proper, if basic, CLI interface, just enough to get it running on my VM host without much issue or many changes.

If you’re interested in my branch of the tool, check it out on GitHub.

Starting Again

This is possibly the 5th iteration of a personal blog I’ve had over the years; I think I started around 2001 with a b2/cafelog setup, the software that eventually gave birth to WordPress. I’m not really much of a blogger, and my posts over the years probably total around 500-600, the vast majority of which have been lost forever between several system moves and lapses of hosting.

So, what now?

I’ve been a big fan of static generators; I’ve never really felt the need to host a heavy solution like WordPress, and after helping out a few sites struggling with load in their early days, I’d rather avoid doing that all over again. Jekyll and Pelican seem to be the “big boys” of the static generator scene, but a newcomer, Hugo, written in Go, seems to have the right balance of simplicity and features. In terms of hosting I’ve actually cheaped out and taken advantage of GitHub’s static page hosting; with a simple CNAME I now have a fully featured website with just the bits I need.
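For anyone curious, the GitHub Pages custom-domain setup amounts to a CNAME file in the repository plus a DNS record at your registrar; the hostname below is a placeholder, not my actual domain:

```shell
# In the root of the GitHub Pages branch of the repo:
echo "blog.example.com" > CNAME

# ...and at your DNS provider, point that hostname at GitHub Pages:
#   blog.example.com.  CNAME  username.github.io.
```

Commit and push the CNAME file, and GitHub starts serving the site on your own domain.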

For those interested, my site’s code is up on GitHub. Here’s to another few years of idle blogging.

Service Shutdown

As some of you may know, I’m getting married, and with that comes the large cost of the actual wedding day itself. With the price creeping up at every wedding fair we go to (it seems bitches love chocolate fountains), I have a mighty need to save as much money as possible, and I can’t really warrant the £70/month hosting bill I’m paying out for various freebie services I offer to my fellow TEST members and other stuff in general. So the following services will be shut down:

  • EVEThing
  • Testcursions
  • Scantool /
  • Meowbet
  • Dropbot
  • Queuebot

For the apps that have been custom built for EVE stuff, I’ll organise open-sourcing them in a few weeks.

The Grand Evernote Reorganisation

I’ve been an Evernote user for a few years now, and it’s really changed how I keep track of the “stuff” I need to keep stored for the future, be it scanned copies of the rental contract for my house or that 5-line how-to guide I found to fix an annoying issue. I don’t think I could last a week without checking in with the reference library I’ve built up over time.

I’m by no means a heavy user; my Evernote store consists of around 350 well-curated notes, most of which have a large PDF scan or image attached. My notebook collection has evolved naturally with my usage and ended up at around 50 notebooks for my very meagre number of notes, so it was time to simplify.

Over the last day or so I’ve been reading blog posts from various Evernote users, from the heavy hitters down to the people who use it as a scratch pad for ideas. What I found is that a lot of people seem to share the same consistent view: split the important bits into notebooks, and the rest can live in one “filing cabinet” sized notebook as long as it’s well tagged. Jamie Rubin documented his notebook reorganisation, and his method really made sense to me. After a round of poking around I had the following layout:

  • Personal
    • Medical
    • Banking
    • Documents
  • Professional
    • Certifications
    • HR
    • Job 1
    • Freelance
    • Payslips
    • P60s
  • Reference
    • Contracts
    • Receipts & Warranties
    • Repairs & Service
    • Filing Cabinet
  • Travel
    • Europe
    • North America
  • Shared
    • Public Notes

I still have some crossover between the notebooks, but it’s a lot better than the previous 50. I’ll have to let this bed in over the next few weeks to see how it works, but I’m already experiencing the advantages of having a single “Filing Cabinet” notebook with all my miscellaneous clippings in it. I’m sure in the near future I’ll have to start mass-tagging items instead of depending on flicking through the list of notes, but that’s for another post…

Flask, EVE, and no persistence

Recently, another EVE-related web app idea popped into my mind, and due to the generally low-impact nature of the application I didn’t require a backend data store. For a long time I’ve used Django for nearly everything due to the batteries-included nature of the framework, but with this application I could throw all that away and start working with Flask; something I’ve been meaning to get my teeth into properly since I started my large Django-based projects.

My tool is already a solved problem, but as is the way of development and EVE, I’ve set about re-inventing the wheel for the sake of “security” and “counter-intelligence”. Well, I spin it that way, but really I just wanted to try to do it myself. In the last few years EVE has had a small UI overhaul which now allows almost anything to be copied and pasted out of the game; the bonus is that once-inaccessible scans, inventory lists, and channel member lists are now sources of information to be parsed and worked with. A common tool to come out of all this is a “D-Scan” tool that allows quick parsing and an overview of the results from your directional scanner; over the last few years a good scan parser has become an essential tool for any FC or scout.

In my app I’m taking a new twist on the tool, trying out a few new views and consolidating some loved features from other tools into one that I can use. In the process of developing this I’ve set myself a goal of not having the tool depend on a database in any way, instead using Redis as a caching backend for the various APIs and data stores needed.

The first big problem you need to work with is the EVE SDE (Static Data Extract) and its “Inventory Types”; this table of around 50,000 rows is something the tool needs to categorize scans correctly. The positive here is that the SDE doesn’t update that often; only with content releases will the SDE be updated by CCP, and even then the world isn’t going to end by not having the latest and greatest SDE to work with. So my solution was to have a package data file populated with a JSON extract of the data I need, loaded into memory when needed; the relative increase of 1-2 MB of RAM is nothing in the overall scheme of the application.
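A minimal sketch of that load-on-demand pattern; the file name and field names here are illustrative, not the real SDE extract layout:

```python
import json

_TYPES = None  # module-level cache: load once, keep in memory


def load_inv_types(path="invtypes.json"):
    """Lazily load the JSON extract of invTypes into a name -> info dict."""
    global _TYPES
    if _TYPES is None:
        with open(path) as fh:
            _TYPES = {row["typeName"]: row for row in json.load(fh)}
    return _TYPES
```

The first call pays the parse cost; every later lookup is just a dictionary hit, which is what makes categorizing a scan cheap.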

So what about the actual scans and results? Parsing the d-scan data is relatively quick, as it’s essentially a tab-delimited file of a fixed format; combined with a few quick lookups of reference data held in an in-memory dictionary, even a taxing Jita d-scan gets processed in a few milliseconds without any major optimization. Once the initial parse is done, the results are dumped to JSON, compressed with zlib, then stored under a unique key in Redis with an expiry of an hour. The view that shows the scan results does nothing more than take the key from the URL, attempt to grab the results from Redis, decompress them, and pass the resulting parsed JSON to the template.
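The parse-and-cache flow can be sketched like this; the three-column d-scan layout and the Redis calls are my assumptions for illustration, not the app’s actual code:

```python
import json
import uuid
import zlib


def parse_dscan(raw):
    """Parse tab-delimited d-scan lines into (name, type, distance) dicts."""
    results = []
    for line in raw.strip().splitlines():
        name, type_name, distance = line.split("\t")
        results.append({"name": name, "type": type_name, "distance": distance})
    return results


def store_scan(redis_conn, results, ttl=3600):
    """Compress the parsed results and cache them under a fresh key."""
    key = uuid.uuid4().hex
    blob = zlib.compress(json.dumps(results).encode("utf-8"))
    redis_conn.setex(key, ttl, blob)  # key expires after an hour
    return key


def load_scan(redis_conn, key):
    """Fetch, decompress, and decode a cached scan; None if expired."""
    blob = redis_conn.get(key)
    if blob is None:
        return None
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

The view then just calls `load_scan()` with the key from the URL and hands the result to the template.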

The deployment target is Heroku, ideally the free tier, and this has dictated some of the design; for example, the zlib compression of the resulting scan is there to shave off as many bytes as possible and get the maximum use out of the 25 MB Redis services available. With these requests we’re CPU-rich but storage-poor, so the trade-off works quite well. So how would this hold up against a DoS? If one person keeps spamming large d-scans into the system, would the Redis server fill up and stop working for everyone? Well, no: the config will be set to expire the oldest keys when memory is low, which works perfectly for our tool.
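That eviction behaviour maps onto a couple of redis.conf settings; a sketch (hosted Redis plans usually set these for you, and the exact policy choice here is my assumption):

```
maxmemory 25mb
# Evict the least-recently-used keys that have an expiry set, so the
# oldest scans fall out first when memory runs low.
maxmemory-policy volatile-lru
```

Since every scan key is written with a TTL, a `volatile-*` policy always has candidates to evict.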

Open Sourcing Past Projects

Over a year ago I stepped back from being a system administrator and developer for the EVE Online alliance Test Alliance Please Ignore; as part of my role there I spent hundreds of hours developing their internal authentication system and other small applications that hang off it. At the time it was quite unique, and only a handful of other alliances had that level of technical setup, so as you would expect, like a small company with something to lose, the code was buried away on private servers and rarely looked over by new people.

Today the landscape is very different. Auth was created at the start of an open source revolution for EVE Online applications, and over time more and more have become open, with specific projects now being spun up (such as ECM) to create tools, and large alliances (Brave Newbies) opening their backends for everyone to use.

The repository copies of the code I have are quite out of date, and I’m purely the copyright holder of them, which gives me the power to license and open them as I see fit. Now that a good amount of time has passed since I left, I feel I can safely release these into the public domain without doing any disservice to TEST and the current sysadmins.

So over the next few days i’ll be looking to move the following repositories from my private Bitbucket over onto GitHub:

  • nikdoof / cynomap
  • nikdoof / django-testauth
  • nikdoof / limetime
  • nikdoof / pacmanager
  • nikdoof / posmaster
  • nikdoof / test-auth

All in various states, but hopefully useful for someone.

Python Packaging, The Right Way

Last night I spent an hour or so packaging up some Python code I made to scratch an itch into a distributable module. Packaging has never been my strong point, and I always ended up making a fiddly package that had some minor problems or didn’t work as expected. This time was especially noteworthy as I had a product that was compatible with both Python 2.7 and Python 3.3.

Thankfully Jeff Knupp has posted about open sourcing a Python project the right way, which covers getting your project set up right, making it easily testable, and getting it working on Travis CI.
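The core of it is a sane setup.py; this is a minimal sketch along those lines, where the project name, version, and metadata are placeholders rather than my actual tool:

```python
# setup.py — a minimal packaging sketch, not the real project's file.
from setuptools import setup, find_packages

setup(
    name="mytool",
    version="0.1.0",
    packages=find_packages(exclude=["tests"]),
    description="Scratches an itch",
    classifiers=[
        # declare the interpreters you actually test against
        "Programming Language :: Python :: 2.7",
        "Programming Language :: Python :: 3.3",
    ],
)
```

With that in place, `python setup.py sdist` builds a distributable tarball, and the Travis CI matrix can run the tests under both interpreters.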

So, my project is live on GitHub. Is it useful? Probably not, but at least I’ve put it out there in case people want to use it or improve on it. Next on the todo list is better documentation and more tests.

Easy bag organisation with the Grid-All


I’ve been eyeing up the Cocoon Grid-It for some time; it looked like the perfect tool to keep all the loose crap in my bag organised in a sensible and easily accessible way. The only thing that pushed me away was the price, which seemed a tiny bit too high for the actual product itself. I understand it’s quite a unique idea and they put a tax on that, but still, for a stiff piece of card with some woven elastic over it?

Thankfully the other day I came across a Chinese “competitor” version, a little cheaper and in the price range I was looking for, so I snapped one up on Amazon and gave it a go. When I received the package yesterday I was a little shocked that it came in the exact same packaging as the Cocoon versions you see in the Apple Store. It seems that Cocoon has suffered the same fate as other companies: their unique product designs get re-badged by the factory that produces them, sold for the factory’s own direct cut. The most common victims of this are in the electronic cigarette community, where you’ll frequently see clones, or the actual mass-produced products rebadged and sold at a large discount over the original creator’s. It’s a shame to support these kinds of tactics, but I originally thought it was a simple competitor rather than a direct factory rebadge and rip-off.

The product works exactly as designed, holding tight even small USB OTG cables, and the rubber threading through the elastic keeps a good firm grip on any items you put in it. I picked up the 30cm x 20cm version, which is large enough to hold the bits I need on the go, and it easily fits into my Crumpler laptop bag with lots of room to spare.

So I’d highly recommend it, but get one from Cocoon, and support the original innovators rather than the knock-offs.

Always Catch Errors

On the 31st of January my NAS stopped responding; with no idea what was going on and zero response from the power button, I did a hard reset, then spent the next few hours double-checking all my config to find out what the hell had happened. I couldn’t find a solid reason, but at least none of the hardware was failing, which gave me some good news; I marked it down as an odd issue and carried on.

The same happened tonight, with the exact same result, but this time I was prepared to some extent. After attempting to log in on the console and seeing memory allocation errors, then SSH dying on its arse, I checked my Munin install and noticed the machine was swapping heavily. This machine has about 8GB of RAM but at any time uses about 600MB; at first I thought it was a memory leak in something, but usually the OOM killer does a good job of smiting any unruly processes. Then I checked my process list and noticed well over 4,000 sleeping processes; something had obviously gone wrong.

On my Deluge setup, due to the instability of a few of the trackers I use, I have a small Python script that checks the current state of the torrents and restarts any that are “red”. Deluge’s API uses the Twisted framework to make everything async and accordingly a lot easier to work with; this was my first venture into the land of Twisted, and it seems I made an error: I didn’t catch the “unable to connect” error. So whenever it was unable to connect, the Twisted reactor sat there running constantly, and as this job runs every 5 minutes, the processes stacked up over 24 hours and killed the machine.
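The guard my script was missing can be sketched in plain Python: confirm the daemon is reachable before doing anything long-lived, and bail out cleanly if it isn’t. The host and port here are Deluge’s defaults, used as assumptions:

```python
import socket


def daemon_reachable(host="localhost", port=58846, timeout=5.0):
    """Return True if something accepts a TCP connection at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False


# In the cron job:  if not daemon_reachable(): sys.exit(1)
# In Twisted terms, the equivalent is adding an errback to the connect
# deferred that calls reactor.stop(), so a failed connection can never
# leave the reactor spinning forever.
```

Either way, the key property is that the process always terminates, connection or not.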

So, it’s always worth checking for errors, not assuming it’ll sort itself out. Lesson learned.

The Big Purchase

For what feels like forever I’ve been agonising over which laptop to buy to provide me with a decent, portable development environment. I dislike being stuck at a desk; I’ve always had a decent machine to sit on my lap and pound away at when the ideas come to me; in a previous life it was my tiny Eee PC, and before that a 12” PowerBook. For the last four or so years most of my code has been written on a home-built i5 PC running Windows, and now that the machine is a little long in the tooth and starting to drag, I decided it was time for a change.

So today I went to pick up a 13” MacBook Pro, the middle-of-the-line one with 8GB RAM and a 256GB SSD. The old PC will be retired to gaming, and the MacBook will be my go-to machine for general browsing and development work.

I’m not a “cult of Mac” member; I’ve had the odd Apple device in my past, but there’s no way in hell I’m going to drop my Android phones for iOS devices. For me, Apple provides some excellent hardware that can’t really be equalled at the moment, and since the “Apple Tax” has slowly disappeared after the switch to Intel hardware, they present themselves as well-priced devices in the upper mid-range laptop market. I had a look through the various competitors over the last few days, and while they produce hardware at around the same specs and price as a MacBook, it seems there are always concessions: a better CPU but a lower-spec GPU, a lower price but a lower-quality screen.

I’ve also come to realise that the majority of my workflow (Evernote, RTM, YNAB) has OS X versions of equal or better quality, and the odd games I enjoy playing on the go (Prison Architect, FTL) have OS X versions as well. So for me the MacBook sneaked in as the choice. I know some people will berate me, but hell, it’s just a laptop, and it works for me.

The Doxie One Scanner

As some may know, I now have an utter, deep hatred for all things Epson. About 3 months ago I went to print something and the printer delightfully informed me that my pink was low and that I should replace it; never mind that I was printing a letter purely in black and white, it still wanted the pink. We didn’t have a cartridge to spare, so I had to go and get one from the shop, thinking that all the other levels were OK. When I got back I was frustrated to find that after I switched it out, the printer prompted for cyan as well.

Needless to say, it’s now sat in the junk pile and a nice cheap laser printer has replaced it.

…but this isn’t what this post is about. The really frustrating part of using a multifunction Epson printer is that you can’t use any of its other functions unless the ink cartridges are OK, which really kicked me in the arse as I used the printer to scan all my documents into Evernote. So now I had to find a replacement.

I had a look through the market, from the amazingly expensive but excellent “Evernote Scanner” (a rebranded Fujitsu iX500) to some dirt-cheap flatbeds from eBay, but one did catch my eye:

Doxie One

The Doxie One. Available for £99 (at the time of writing), it’s a small and compact scanner that can operate without a PC, dumping the resulting files onto an SD card in a standard DCIM file layout, so anything that can read a camera’s SD card can take these images. Excellent.

The Doxie series of scanners is made to be small and useful; its bigger brother the Doxie Go adds a higher scanning resolution (600 DPI) and an internal lithium-ion battery. This isn’t to say the One is inferior: it has a 300 DPI resolution and takes AAA batteries to be just as portable as the Go.

The scanner itself works wonderfully; it chewed through my 40-page tenancy agreement with ease, and the software allowed me to clean up the pages and “staple” them all together as a single PDF to store in Evernote. While it doesn’t win any awards for image quality (after all, it’s 300 DPI), the speed and portability of the device really make up for it.

If you need a small, portable scanner to plough through your odd documents and receipts ready to be stored in Evernote or its kin, it’s an excellent buy and I’d suggest it.

deluge-webui and Nginx

After a short, frustrating time, I’ve finally got the Deluge WebUI to proxy through Nginx without any errors. The revelation came when, digging through the Deluge forums, I found a little nugget of information which solved it all: a small header called X-Deluge-Base that, when passed, will prefix any media calls made in the page with that text. So instead of setting up weird aliases and fiddling around with Nginx’s options to get it to work, I could just specify that and use a very basic server config.

upstream deluge {
  server localhost:8112;
}

server {
  server_name  deluge.home;

  location / {
    proxy_pass  http://deluge;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Deluge-Base   "/";
  }
}

Back to YNAB

YNAB has always been an excellent tool for me. I started using it around this time last year when I discovered the snappy YouTube videos and the free demo. Within a few weeks I was hooked, and I quickly saw the massive advantage when I was able to manage my Christmas pay and put a hefty lump of money into a savings account; all I needed was something to track my money.

After 7-8 months the sheen wore off as I was fighting with 60+ categories, a newly added credit card, and quirky statement importing, and around October I essentially gave up. From there my financial situation didn’t get worse, but I started to slip on the credit card, and the bank upping my limit by another £1,000 really didn’t help matters.

So, another year, another fresh start with YNAB. Looking back, I identified a few problems which really started to bite over time and made my budget a hell of a lot more complicated:

Multiple categories for the same type of things

I found that I had multiple categories for essentially the same thing. The best example is my personal site hosting, which was split into Linode, Digital Ocean, Amazon, and Domains. I suppose splitting out is a good way to identify and budget for specific recurring bills, but for me Amazon store purchases got mixed in with my AWS charges and it started getting messy quickly.

So this time round I’m having a generic “Hosting” category to budget into; £15 a month should cover all my costs and also save up some money for those 2-3 year domain renewals. Sticking to one category should hopefully make my money a lot easier to manage in this area.

Make use of split transactions

For a long time I would do our weekly shop at Tesco and end up picking something up for the house, be it new bulbs, paint brushes, or a duvet. The problem is I never sat down and split the transactions between the food shopping and the house budget, which usually ended up with the food category taking a beating all the time while the various smaller categories never got touched, and ended up being raided to make up the shortfall in other areas.

I suppose this was compounded by split transactions being unavailable in the mobile client until the middle of last year, by which point I was already in a bad habit and wouldn’t change so easily. The lesson here is that splitting your transactions will pay off, even if it does feel like you’re wasting time doing it.

Simplify your categories

At the end I had 72 categories for my spending; as you can imagine, trying to fill out the budget on payday took some time. I spent so much time flicking through my categories and assigning £10 or £20 here and there that by the time I got down to my savings accounts I was putting in around 10-20% of what I had been in January. Because I had a budget for damn near everything, it seemed like I could justify spending it, on new techy toys and vaping gear.

At the restart I’ve settled on about 20 categories that seem to represent the core spending in my life. I have the bonus that my girlfriend takes care of the household bills from the joint account, which simplifies my view to a single payment to that account each month. If you go too far down into the categories, you’ll be in danger of justifying any type of spending, just as I did.

Reconcile your accounts

At first I never bothered using the reconcile function within YNAB, as my totals always “worked out”. After a few months of using it I discovered strange situations where the total amount of budgeted money didn’t match up with the total available; it’s a common problem with YNAB and it’s heavily covered in various forum posts.

One of the biggest tools to assist you with any oddities is reconciliation. Grab your statement, sit down with YNAB, and comb through it; it’s a great way to pick out that incorrectly categorized purchase or the missing 53p you can’t find anywhere.

Also, if your bank offers the feature, import and match your bank records. It takes a few minutes to do and can quickly pick out oddities, with the bonus of marking items as “Cleared”, so YNAB shows you your cleared balance, which is handy if you’re out and about with just your phone.

Use the community

YNAB has a large and popular forum community; make use of it to answer your questions or get general support with your financial situation. While it’s mostly keyed towards an American audience, they’ll help you with YNAB. In the UK I’d suggest using Money Saving Expert for asking any financial question without paying for legal or accounting support.

So, I hope this has been helpful; I’ll keep you posted with how I do in 2014.

Brother QL-570 and Linux

A few days ago I picked up a Brother QL-570 cheap on Amazon for the other half, as she’s about to set up her own online shop and needs to print out the odd address label. While I did check online that it was supported on Linux, I didn’t really look into how well it’s supported, and unfortunately it’s not good.

Brother has released a driver set for the device, but it has quite a few issues that present massive stumbling blocks. The deb packages seem to be set up for Ubuntu only, and die horribly when installed on Debian due to CUPS using an init file of “cups” instead of “cupsys”. A hacky way around it does exist, but honestly this is more poking than should be needed for a simple deb package.

The driver also has a few issues regarding configuration settings: define too many in CUPS and the processing tool Brother includes segfaults. A fix does exist, but a general lack of interest seems to perpetuate the error even with a patch available; the driver has gone unchanged in Ubuntu and Debian for many years now.

Thankfully, it seems that Linux isn’t the only platform affected. After several hours of frustration trying to get the device to work, I gave up and set up the printer in CUPS as a raw printer, allowing my Windows PCs to use it with the official drivers, only for it to happen again…
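For reference, defining a raw queue in CUPS is a one-liner; the queue name and device URI below are placeholders (find yours with `lpinfo -v`), since a raw queue simply passes the job through unfiltered for the client’s own driver to handle:

```
lpadmin -p QL570 -E -v usb://Brother/QL-570 -m raw
```

Clients then install the official Brother driver locally and print to the shared queue.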

So, why the post? Well, if you’re looking for a Linux-compatible label printer, I’d advise staying away from Brother’s offering. Mine works at the moment, but it’s barely doing what I really wanted it to do.

Django MultiDB to the rescue!

One of the key components of Auth is the ability to communicate with numerous other systems and manage their user authentication in situations where we can’t modify the application to use our authentication API. To achieve this we have the Services API, a generic interface designed for basic operations such as creating a user, disabling an account, and so on. If we want to support a new system, we write a simple Python module with these functions, and the API does all the required importing and abstraction for Auth.

In previous versions of the API we handled databases in a very weird way, either by writing our own SQLAlchemy queries or by bodging the Django ORM into giving us a basic cursor to work with; while far from perfect, it allowed us to edit the databases of other applications without much hassle.

Recently, Django was updated to version 1.2, and with it came the MultiDB functionality, which allows you to access multiple databases natively in the ORM. For our database layer this presents some new options that weren’t available in the old versions.

Using Django’s database introspection you can generate models from an existing database schema; for this example we’re working with MediaWiki’s native database in MySQL. So first of all we need to define the database in our settings:

    DATABASES = {
        # ... default database settings ...
        'wiki': {
            'NAME': 'wiki',
            'ENGINE': 'django.db.backends.mysql',
            'USER': 'wiki',
            'PASSWORD': 'passwordgoeshere',
        },
    }

Next we fire off the inspectdb command to produce our database layout:

./manage.py inspectdb --database=wiki > wiki/wikimodels.py

After a short time you’ll have a fresh Python module with all your database models nearly ready to go. The first thing to do is edit this file and change any foreign key columns to the required Django ForeignKey() field; while inspectdb does as much as it can, it can’t detect foreign keys.
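For example, the hand-edit might look something like this. The column and table names are based on MediaWiki’s schema, but treat the exact model code as a sketch rather than the real generated file:

```python
# Sketch of hand-editing the generated models (field and table names
# based on MediaWiki's schema; treat the details as assumptions).
from django.db import models

class Revision(models.Model):
    # inspectdb generated:  rev_user = models.IntegerField()
    # hand-edited to a proper relation against the generated User model:
    rev_user = models.ForeignKey('User', db_column='rev_user')

    class Meta:
        db_table = 'revision'
        managed = False  # Django shouldn't try to create/alter this table
```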

Once you’re ready to rock it’s a simple case of getting your shell out and giving it a test run.

./manage.py shell
>>> from wiki.wikimodels import User
>>> User.objects.using('wiki').get(user_id=1).user_name

Simple! No more messing with db cursors, just straightforward ORM access. The next big leap is defining the database connections at runtime by injecting into the DATABASES variable; by doing this I can remove the problem of having to manage each service’s database connection in the settings file and instead have them defined on a per-service basis.
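A sketch of that idea: build the connection dictionary at runtime and hand it to Django’s connection handler. The `service_db_entry` helper is hypothetical, but the dictionary keys match the DATABASES format shown above:

```python
# Hypothetical helper: build a DATABASES-style entry for a service at
# runtime instead of hard-coding it in settings.py.
def service_db_entry(name, user, password, host='localhost'):
    return {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': name,
        'USER': user,
        'PASSWORD': password,
        'HOST': host,
        'PORT': '',
    }

# Inside a running Django process it could then be injected with:
#     from django.db import connections
#     connections.databases['wiki'] = service_db_entry('wiki', 'wiki', 'secret')
```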

Hacking the ZTE MF627

It’s been a while since I’ve done a good hack article, so again I’m back onto my favourite topic of 3G modems. Thanks to the generous promotions at 3dongles4free I’ve been able to pick up a new Three dongle for next to nothing. As I’ve already got my E160G I didn’t really need this one to be on the Three network. After a quick search around and a few suggestions from existing mailing lists I found out that a hacked firmware exists, and these cheap and cheerful dongles can be flashed to allow any SIM card to be used. It should be a simple job of updating the software and using the new SIM card.

First of all, grab the software pack from Rapidshare. Due to the questionable nature of this copy of the firmware no one has been willing to host it on their own hosting, and I’ll keep to that idea. Extract the files from the RAR and you should have a firmware upgrade and an installation folder for the connection software. As the existing Three connection software is very limited, the software package includes the Telstra version, which allows you to define your own settings.

Before you attempt the firmware upgrade, remove any existing Three software, install the Telstra version and remove your SIM card from the dongle, then simply plug it in and run the firmware upgrade. This process will take around 15-25 minutes and once it’s done it’ll give you a prompt. During the upgrade do not power off your PC or remove the dongle from the USB socket; doing so will brick your dongle, rendering it completely useless.

Now, put in your non-Three SIM card and plug it back into your PC. The Telstra software should start up and try to detect the device. You’ll need to configure the software with your provider’s APN settings, but the PDF document included with the software package will give you all the details you need. Remember, I take no responsibility for people bricking their equipment; you have been warned.

Send SMS using Debian and a Huawei E160G

People who use their Huawei E160G on Three will know that in the Windows client you can send and receive SMS. This comes at a minor cost of £0.10 per SMS, and you can add bundles onto your mobile broadband account to make this cheaper. Similar functionality can be achieved in Linux, and it’s very useful if you’re like me and want to drop someone a message when you don’t have your phone around. For this we’ll be using Gammu, a toolset for managing phones via the AT GSM command set; it was originally forked from Gnokii, a similar toolset for Nokia handsets. As the E160G opens a serial port with access to the AT command set this is a relatively easy tool to set up. First of all, we need to grab the packages. As these are standard Debian packages you should have no issues.

$ sudo apt-get install gammu

Next, we need to configure Gammu to pickup the correct device. Check your dmesg for the serial port:

$ dmesg|grep tty
[12321.308078] usb 5-3: GSM modem (1-port) converter now attached to ttyUSB0
[12321.308275] usb 5-3: GSM modem (1-port) converter now attached to ttyUSB1

Edit ~/.gammurc, or run gammu-config to change the device settings. Your ~/.gammurc file should look similar to:

[gammu]
port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = yes
logfile =
logformat = nothing
use_locking =
gammuloc =

Give it a test by getting all the SMS from the device:

$ gammu getallsms

This should bring back all the SMS currently stored on the stick, which should include your login details for the Three website (unless you’ve deleted them). To send an SMS use the “sendsms” command:

$ gammu sendsms text 07874454543
Enter message text and press ^D:
Test Message!!!!!1!
Sending SMS 1/1....waiting for network answer..OK, message reference=2

Gammu has a lot more tools and options to explore. Now you have the basic config you can set up SMSD, which can expose the ability to send SMS over a network. Gammu also has a Python interface, so you could build your own frontend client for sending SMS. For more details explore the Gammu Wiki.
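As a taster, a minimal send function using the python-gammu bindings might look like this. Treat the `gammu` calls as assumptions based on python-gammu’s documentation; the `build_sms` helper is mine:

```python
# Sketch of sending an SMS via the python-gammu bindings; the gammu
# module calls are assumptions based on python-gammu's documented API.

def build_sms(number, text):
    # Message dictionary in the shape python-gammu expects.
    return {'Text': text, 'Number': number, 'SMSC': {'Location': 1}}

def send_sms(number, text):
    import gammu  # from the python-gammu package
    sm = gammu.StateMachine()
    sm.ReadConfig()  # picks up the ~/.gammurc we set up above
    sm.Init()
    return sm.SendSMS(build_sms(number, text))
```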