During my CodeMotion Berlin 2016 talk, I promised some references. Here they are!
File this one under first world problems: I use a headless Mac Mini with my trusty ScanSnap to scan documents into Evernote, but every once in a while Evernote takes its sweet time to synchronize.
To remedy this, I whipped up a piece of AppleScript that syncs Evernote when there are unsynced notes in the notebook called "inbox".
The script can be found on https://github.com/angelos/EvernoteSync.
Trying out something new here: an overview of the news that caught my eye this week. Not a high-rolling week to start out with.
Netflix moves to Amazon's AWS (Ars Technica)
Over the last seven years, Netflix, one of the biggest movers of data on the internet, has been slowly phasing out its own data centers in favor of Amazon's AWS offering. Having now closed its last remaining data center, it has completed this move, showing that even massive scale isn't necessarily a reason to run your own data centers. Unless you're, like, Facebook.
Spotify moves (a little) to Google (Stratechery $, WSJ $)
In an unrelated move, Spotify, another mover of data and poster child for AWS, is moving a minor part of its hosting to Google. However, this probably won't keep Jeff Bezos up at night: the infrastructure being moved comes from Spotify's own data center, and Spotify got a pretty good deal. Sounds to me like Google drumming up fanfare for its own offerings, and Spotify taking the bait.
Apple and the FBI (Ars Technica)
With the FBI requesting the unlocking of a locked iPhone in a pretty much indisputable case, and Apple's Tim Cook publishing an open letter about why he won't do so, the internet has clearly bifurcated in two camps.
On the one hand, we have the American people supporting the FBI; on the other, the international geek community coming up with both good reasons why Apple shouldn't comply, and clever ways in which it could be done safely.
All things considered, this is a typical hard problem: damned if we do, damned if we don't. I feel Apple is rolling out pre-canned privacy PR to support its "we don't need your data" case; note that this is not out of goodwill, but because Apple makes its money on hardware instead of advertising. As much as I would like to take the "either everybody gets privacy, or no one does" position, I feel that Apple is fighting the wrong fight here. Apple can't credibly state that it's impossible to support the FBI in what they requested, and refusing will erode a lot of goodwill toward the company, and the tech industry in general. I fear ham-fisted legislation coming up, putting us back to the crypto-as-ammunition days.
Sometimes it is wise to pay the ransom (Forbes)
A Hollywood hospital has decided to pay a 40-bitcoin (roughly $17k) ransom to unlock its own files. Sounds like a good decision, and perhaps a stupidity tax, to me: after having their systems offline for a number of days, and not being able to recover from non-existent backups, the board decided to pay the ransom and get control of their systems back. Probably the cheapest thing to do; I hope they now invest in good backups.
Ransomware is probably here to stay. That is, as long as the quality stays up; a few bad apples might turn ransomware into a lemon market, either by being easily crackable, or by not unlocking files even when you do pay up. Crypto is still hard.
IoT still doesn't take security seriously (Krebs on security)
A worrying piece on the state of Foscam cameras: apparently these systems have a lifeline to a more-or-less shady Chinese support organization, without disclosure or opt-out. As Krebs puts it, "This is Why People Fear the 'Internet of Things'."
Amazon starts hosting games (Amazon)
Lumberyard is a game engine-and-hosting offering built on CryEngine (the commercial game engine behind Far Cry) and Double Helix (components of a game studio), and backed by AWS. GameLift takes care of session hosting, and Twitch integration makes this into a suspiciously full-featured, and hard to beat, game hosting offering.
It seems that Amazon is slowly but surely moving away from components (storage, computation, etc) into more vertical markets. Amazon's IoT offering is another one of these: a vertical that takes care of most of the issues in a market, while providing a convenient lock-in to the Amazon ecosystem. Be on the lookout for more of these!
Microsoft acquires Xamarin (Microsoft)
Xamarin, the company formerly known as Mono, and known for providing a toolkit for developing native mobile applications in C#, has finally been acquired by Microsoft. After a close encounter at Build 2015, where Xamarin support in Visual Studio (along with Cordova integration) was announced, Microsoft has finally pulled the trigger and acquired the company outright.
Some may see this as Microsoft being a good technology citizen; for me, it's another nail in Windows Phone's coffin.
Last week, an article on CIO.com caught my attention: ARM will skip modems for now, despite being a huge force in mobile. I can only applaud them for this, as all too many companies have a hard time focusing on the things they do well; in ARM's case, creating and licensing mass-market, special-purpose processor designs.
The problem with ARM also offering modem chip designs is not that they wouldn't be any good at it: it's directly in line with the solutions they build today (including Bluetooth designs), and would undoubtedly be a huge revenue driver. Instead, there is a mismatch in business models: whereas ARM is very good at creating a small number of very high-quality designs that are then licensed, diversifying into the modem market would lead them into a maze of different network technologies, regulations (more so than with Bluetooth hardware), and customers with very different wishes, requiring close collaboration with each customer for each batch.
All in all, this decision is not about technology, but about alignment. ARM is the company that paused to wonder not whether it could, but whether it should.
All productivity starts with personal productivity. And every now and then I am asked about the setup I use to keep both productive and sane. I've now gathered this on a separate page, which I intend to update every once in a while. Enjoy!
Over the past five weeks, I've followed DelftX's excellent Economics of Cybersecurity course, which concludes with an essay giving my opinion on a topic in this field. I've picked the incentives that relate to privacy, and how regulation is the only feasible way to bend the outcome to one in line with the common good.
Coming from a technical background, my main experience in security has been around security measures (known as “controls” in this course). I have noticed that technical measures rarely tell the whole story: there is no black and white in this matter, and decision makers have no use for “this is just safer”-type arguments. This applies not least to privacy-related concerns: while intuitively it seems obvious that privacy should be a concern, it can be complicated to turn it into a business decision. The technology field has some uncommon incentives, most having to do with the reduced friction that technology brings, that make it impossible for a market-driven situation to shake out such that the privacy concerns of the public are served best. I therefore believe that regulation is needed that pushes liability to the party best suited to control privacy.
As a software professional, I am well aware of the non-intuitive dynamics of the world of technology. The lack of natural friction on transactions opens the door to extremes of market share: first a huge number of small players in a race to the bottom (e.g., the current state of health- and fitness-related products and software), which later progresses into a practical monopoly for the player who understands this game best. In both situations, pre- or post-monopoly, market parties tend to make the choices that make the best sense for their business goals. The winner-takes-all dynamics of this market mean that the effects are amplified once one party reaches a practical monopoly. From personal experience, I’ve seen that the direct business incentives around user privacy are never those of the commons. This is exacerbated by (a) either unknowing or intentional company policy to make use of the wealth of information that many market parties are currently laying their hands on, and (b) the fact that most consumers currently willingly divulge information for marginal benefits, putting no pressure on market parties to change this situation.
The current situation (customers don’t care, market parties are driven by business incentives) is an unstable balance, waiting to break. On the one hand we see data abuses, breaches and security flaws running wild in various industries, such as insurance (Anthem), automotive (BMW) and appliances (Netatmo), which don’t get the backlash one might rationally expect. On the other, we see an unwitting movement towards trust-no-one (TNO) systems, such as WhatsApp’s inclusion of TextSecure’s end-to-end encryption, producing marginal security awareness in the public.
Some time in the near future, I expect these two trends to meet, likely in a large-scale breach whose effects will reverberate in the minds of the public. This can undermine public trust in all market parties that process private information, not just the bad apples of the industry, potentially halting technological progress as the public cannot distinguish good actors from bad ones, and turns its back on both. The situation outlined above shows that market pressures will not produce any of the desired effects. Privacy, in this sense, becomes a communal asset, which needs (minimal) legislation to shepherd it. In security, mainly in banking, it has been shown that pushing liability to the party best suited to affect it provides the proper results. The European Union’s General Data Protection Regulation (GDPR), which is expected to be adopted by most EU member states in 2015 and to become enforceable starting 2017, provides just this framework for putting liability with the correct party.
For the February 2014 edition of Luminis' Conversing Worlds magazine, I wrote a column on Talent.
When I came aboard at Luminis, one of the first things I was taught was that "Luminis does not spread talent thin." This in contrast to classic consultancy, where dangerously green employees burn hours under the supervision of an overworked senior. That model works fine, for now, so why do we need to be contrarian again?
"For now" is the magic word. The classic model has become a commodity, and suffers from falling rates, outsourcing, and worse still, it no longer helps our customers. When it really matters, renowned parties don't choose the familiar supplier, but a small club from the east of the country. That is the result of reputation, track record, and yes, talent. And that is why we are contrarian.
Talent? I don't believe in rock stars or ninjas: the worst thing that can happen to you is that everything you do comes off effortlessly, while those around you tell you how good you are.
"The first rule of Imposter Syndrome Club is that you aren't good enough to be in it," I read recently. You know the feeling: you think you're doing well, but there's a nagging feeling saying "any moment now, you'll be found out." That is exactly the kind of restlessness I have in mind. Talent has to ripen and grow, and that can hurt. You learn the most from experiences in which things don't go the way you want, in which you feel dissatisfied with your own performance: you wonder why you are stuck on a plateau, why it seems to come more easily to others. You cannot speed up the growth process, but you can steer it.
Every now and then I jump out of an airplane with a parachute, and soon I will take part in an obstacle race with lots of water obstacles, even though I hate water. I don't do these things because I dare to do anything; I do them precisely because they are scary. The comfort zone is called that for good reason: you feel at home there. But with that nagging restlessness, even your comfort zone isn't all that comfortable. You can put that to use: share the restlessness, go looking for challenges, and take colleagues and others along with you. It can be scary to put your skills on display, but the worst that can happen to you is that you learn something new.
Enough about you; what does this mean for your organization? Malcolm Gladwell (also known for the 10,000 hours of practice it takes to become an expert) speaks of The Talent Myth. He argues that talent exists, but that it is only a small factor in success. Organizations that attract talent simply because it is talent end up disappointed. Real success, says Gladwell, lies in the ability to let talent grow. The second thing I learned in my time at Luminis is that anything is possible, but you have to do everything yourself. That also goes for developing your own talent. We are not GE, where a mapped-out path with carefully measured challenges lies ready for talent. Instead, we are an organization in which everyone can seek out their own challenge.
Talent attracts talent. But anyone who has ever played with Clickets (little plastic marbles with a magnet inside) knows that even magnetism can be stopped when the Clicket lies still. On the carpet, it just needs a little push. We will have to provide that movement ourselves. Luminis Arnhem has therefore chosen "outward!" as the overarching theme of its 2015 business plan.
Whatever stage of your career you are in, developing your own talent and that of others is your job. Don't be afraid, dare to share, and make something beautiful of it!
In the October 2014 edition of Luminis' Conversing Worlds magazine, the 12.5-year anniversary edition, I wrote up my thoughts on my personal history.
I have been around for more than half of Luminis' existence: I joined what was then still called Luminis iQ Products in 2007, and after some wanderings I now work as a fellow within Luminis Arnhem. There, I feel like the technical conscience alongside our director, Jeroen. The fact that Jeroen also has a technical background only makes my role more challenging.
If I have learned anything about Luminis in recent years, it is "anything is possible, but you have to do it yourself." There is a minimum of structure, but as soon as it comes to your own development, that of your unit, and how you can create the most value for us and our customers, you are on your own. You get that freedom, but also that responsibility. Not everyone can handle it.
What makes this organization unique is its mix of focus: both really understanding the customer's world, and taking software engineering seriously as a discipline in its own right.
In seven years at Luminis I have helped win major assignments and seen them disappear, written code I am proud of and code I am ashamed of, made beautiful and completely wrong designs, and saved and ruined demos at the last moment. What stays with me most is the feeling of a buzzing team that together goes for a result the customer really benefits from. That is what gives me energy!
For twelve and a half years now, Luminis has been in motion. For me, that motion is currently a shift toward "added value." Building software is not a mere trick; if it is, someone else can certainly do it cheaper than you can. The reason that you, as a developer, get to work in this rich part of the world is that you can make the connection: use that right half of your brain!
In this December 2013 session at YOW! in Melbourne, Kevlin Henney makes the case that the SOLID principles are an open-ended set of guidelines, which aren’t as black-and-white as they seem. This talk works best when you have some experience applying (and maybe even explaining) the SOLID principles as they appear in code.
The best thing about this talk is how it shows that “it depends” is a real answer. It always depends, and there are no absolute truths; not in software engineering, not in life.
Are we all lost, then? Not really. Kevlin shows that there is value in the SOLID principles when used as a tool for learning, not as gospel.
This week's Mandatory Viewing is by Gerard Meszaros (the guy behind XUnit Test Patterns).
Gerard talks us through a topic most of us have run into: you start out with a nice set of readable tests, and while your understanding of the system grows, you end up with lots of cruft in your test code. Your tests no longer describe what the system is doing, but how you need to use the components of your system to make something happen. Good as documentation, bad as specification.
This week's Mandatory Viewing is Jim Weirich's Play by Play. Pluralsight put up this talk shortly after Jim passed away, as a tribute.
In this almost 90-minute session, you can watch Jim take a problem, and iterate through a vast number of API designs, trying to get a feel for how his designs influence the user's happiness. Even if you don't know Ruby, you can follow the process of someone who thinks in code, and isn't afraid to say "let's try this, and see what it does."
The process of "trying out" what an API will look like before using it resonates well with me. I prefer to make my tests read as close as possible to the thinking process of an API user. Try to put yourself in the user's IDE, and try to feel what he's feeling while using your API; empathy with your user reduces the number of WTFs per minute drastically.
- "The maturity of frameworks can be shown in how good their error messages are."
- "We've proved the basic technology works. We can write a proxy. Now is the task of finding the right API that works well... if it's too complex to use, people won't like it."
- "We explored a couple paths that proved unfruitful. I think that's good in that you explore these things and you find that's not really what I wanted."
While writing my master's thesis, quite a few years ago, I wrote a trio of supporting papers to help me structure my thoughts. I recently came across them, and was surprised by how relevant they still are, even given what I've learned about software architecture in the past years. You might like them!
- Software architecture as a wicked problem deals with what makes software architecture hard, and how this overlaps with planning problems.
- An engineer is not a carpenter shows how software engineering is not a real engineering discipline. It contrasts software with other 'materials' we work with.
- Architecting adaptive software is basically a readable form of my master's thesis. It deals with what adaptivity is, why you can't design it into systems, and what factors you can use to make a system exhibit adaptive behavior.
In one of my current projects, we have an XML message that looks a little like this.
<envelope>
  <message>
    <message-id>1</message-id>
    <keys>first-name</keys>
    <keys>last-name</keys>
    ....
    <values>Angelo</values>
    <values>van der Sijpt</values>
  </message>
</envelope>
For testing purposes, I want to use XPath to get values from this, i.e., match the value that belongs to a given key.
After some fiddling, I wound up with this slightly ugly contraption. Might be useful for you one day, though I hope you can be spared...
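The contraption itself isn't reproduced here, but the index-matching trick behind it can be sketched with xmllint: keys and values form parallel lists, so the value for a key sits at the same position as the key. A sketch under that assumption (the file path and key name are illustrative, not from the original):

```shell
# Write a minimal version of the envelope message to a scratch file.
cat > /tmp/envelope.xml <<'EOF'
<envelope>
  <message>
    <message-id>1</message-id>
    <keys>first-name</keys>
    <keys>last-name</keys>
    <values>Angelo</values>
    <values>van der Sijpt</values>
  </message>
</envelope>
EOF

# Count the keys before 'first-name', and use that count (+1)
# to index into the values list.
xmllint --xpath '/envelope/message/values[count(/envelope/message/keys[.="first-name"]/preceding-sibling::keys)+1]/text()' /tmp/envelope.xml
# prints: Angelo
```

The `count(...preceding-sibling::keys) + 1` expression computes the key's position and reuses it as a positional index on the values; this is the usual XPath 1.0 workaround for the lack of a zip-like operation.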
For the last months, I have coached some engineering teams in a large software-intensive organization. Some are running fine, some need extra work. A topic that regularly pops up is "how do I handle all these interruptions during my day?"
I define an interruption as anything that makes you context-switch away from the work you're doing for the team. So, questions from teammates are not interruptions. Neither is responding to email at a time of your choosing. Phone calls regarding different projects, or people showing up at your desk with unrelated questions, are.
Scrum asks for a distraction-free environment, routing interruptions through the product owner. However, interruptions are a cultural phenomenon, not an organizational one. I will treat interruptions as a given of the current situation, channel them in a productive manner, and slowly drive them out.
Company culture has its reasons for existence, and is hard to change. I believe there are three intertwined cultural components to interruptions,
- "it has always worked like this",
- there is no pushback,
- costs are hidden.
"It has always worked like this"
What and why
People have lost trust that goals and deadlines will be met. Especially with deadlines far into the future, there is a track record of slippage and work not getting done at all.
Managers have learned that anything not marked as urgent has a tendency to be left alone, and not get done. People get results by declaring emergency. And you know what? It works.
How to handle
This is ingrained into the organization, and hard to counter. Make commitments, and consistently meet them ("I'll get back to you next Friday"). Make promises, and keep them, and people will learn to accept you deferring work. We will get back to this. I promise.
There is no pushback
What and why
As engineers, we are inherently nice. When faced with the choice to either shine in front of someone present, or diligently work on for someone who isn't, we choose the former.
But, apart from an opportunity to please, is this work really that important?
How to handle
This is about personal choice and reputation: build up a track record of being dependable, and your pushback will be more gladly accepted.
Most interruptions are not that urgent at all, but our desire to please gets the better of us. To handle this well, we need to make a well-reasoned decision on the importance of the work before us.
Personal productivity methods provide some help here. For instance, my personal favorite, Getting Things Done, forces you to make a choice for any interruption over two minutes: Do, Delegate, Defer, or Drop.
'Defer' can be part of your personal system, but is more effective as a team agreement. If you decide to deal with your interruption demons on your own, and not be as available to the team as you could be, you're hurting the sprint result.
There are many ways to handle this. For instance,
- One of the teams has picked Tuesday as the 'interruptions day'. For every interruption, they ask "can this wait until next Tuesday?"
- The notion of core hours is written up best in chapter 10 of the Scrum Field Guide. In short, these are the hours every member is supposed to be available to the team. Outside of these hours you're free to work the way you want, and any interruptions can be dealt with then.
Making promises and keeping them is not external behavior; it is part of who you are. Be dependable to both your clients, and to yourself.
If you make a promise to yourself to get some work done, treat it just the same as making a promise to someone else. If you need to break it, renegotiate. You can't be dependable without being authentic.
Costs are hidden
What and why
We all know context switches are expensive, but just how expensive? How can you explain to the angry manager at your desk "ah, yes, well, you know, you bothering me will take roughly 22.6 minutes of productivity away. From another project." Remember, as engineers we're way too polite for that.
How to handle
Transparency. Don't hide incoming work in some support process, taking away control from your product owner. Make it visible that incoming work hurts by keeping it on the same sprint board as the regular work, and show this pushes out planned work. Start counting interruptions, and show the correlation between interruptions and velocity.
I see interruptions as a part of the process, but we can make them exceptional. There are many ways to handle this, but picking just a few,
- use short sprints, so this work can be planned in a regular fashion, or
- spend time increasing product quality, so we end up with fewer emergencies.
Well, what does that mean for you? What can you, as an engineer, do?
Always wonder "what is the most valuable thing I can do at this moment in time" by consciously making the Do/Delegate/Defer/Drop decision.
Build up a track record of dependability: make promises, and keep them. Real emergencies are rare.
With your new aura of dependability, use it to manage interruptions even further. Ask people to make their input less disruptive: don't call, email. Don't show up at my desk, file a well-written bug report. Use IRC if you must. This leaves you free to handle the interruptions regularly, but at a time of your choosing.
Every interruption is a chance to shine. Shine only on your own terms.
Today I ran into a typical documentation problem.
- The organization mainly uses Word, but I don't use Word.
- I want a solution that, while the documents are checked into subversion, is navigable in the browser.
- So, HTML is probably good, but I'm not going to write HTML by hand.
- I don't want any server-side code, and I also don't want to check in 'compiled' HTML.
<div id="content">
<!-- This div contains all content, in Markdown. -->
Showdown is

- legend...
- ...wait for it...
- ...dary!
<!-- End of div with Markdown -->
</div>

<!-- This script translates the Markdown to readable HTML -->
<script src="https://raw.github.com/coreyti/showdown/master/compressed/showdown.js"></script>
<script>
var converter = new Showdown.converter();
var content = document.getElementById('content');
content.innerHTML = converter.makeHtml(content.innerHTML);
</script>
which gives me a page that says the following:

Showdown is

- legend...
- ...wait for it...
- ...dary!
Isn't that an awesomely simple solution? Just type Markdown in the predefined div, save, commit, and enjoy.
Oh yeah, subversion
When using this solution with Subversion, remember to set the svn:mime-type property to text/html, so the file can be viewed in the browser. You can do this using
svn propedit svn:mime-type <file>
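propedit opens your editor to enter the value; if you prefer a non-interactive command, propset takes the value on the command line. A sketch (the file name is a placeholder):

```shell
# Set the mime type directly, then verify it took effect.
svn propset svn:mime-type text/html notes.html
svn propget svn:mime-type notes.html   # prints: text/html
```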
First up: for those who don't know Devnology, you probably should. Visit devnology.nl and sign up for one of the upcoming events! Devnology's third Community Day took place at VX Company in Baarn, in the best-furnished basement I have ever visited. The Community Day is like a one-day conference with blocks of time carved out for different sessions and workshops; I'm only human and have not experienced them all, so I'll just cover the ones I was a part of.
You shall not pass
Not exactly an activity, but it is becoming a tradition for the Community Day: upon arrival, we find closed gates. After some 45 minutes, a security guard shows up, and once inside, things start heating up. Literally: it was roughly -15 °C outside, with blue skies and some sunshine, making the wait not all that unpleasant. I have never skied, but I imagine this is what après-ski feels like.
Cloud9, or, why do I install all of this stuff
Mike de Boer is a developer at Cloud9, and gave a very nice introductory talk on how Cloud9
"is to Eclipse as Google Docs is to MSOffice"
I liked the way he walked us through the various features and advantages of Cloud9, but I would have liked a more developer-oriented pitch. We were shown a quick demo of debugging and live changes, but nothing showed up that made me go "wow, time to ditch IntelliJ, Eclipse and TextMate at the same time!"
To infinity, and beyond!
Never one to shy away from taking engineers out of their comfort zone and into the land of mathematics, Felienne Hermans flipcharted her way from ancient Greece's Zeno's paradox to the more modern notion of Hilbert's Hotel. It felt a bit like the infinity-related excerpts from The Clockwork Universe, all compressed into roughly an hour. Also, I'm very charmed by the let's-have-a-flipchart-and-start-talking way of presenting.
On a slightly less related note, it was her birthday!
What I didn't like so much is that there was too much material for the reserved timeslot; probably a day's worth of material paraded by in some two hours, barely leaving time for hands-on Clojuring. I hope Martin finds the time to reduce the amount of material (to, let's say, an afternoon); then I'll be first in line again!
A programming language is a language too, right?
My day ended with Michel Rijnders doing some storytelling on how he started out as a philosopher by trade, and recently stumbled onto his books on the philosophy of language. There surely should be a link between human language and programming, right?
Well, no. As Michel expertly showed, even though human language is all about conveying meaning and alluding to another person's mental model of the world, the link to programming is pretty slim. Since in science there is no such thing as a failed experiment, I enjoyed this deviation from our usual programming-the-world view.
It later dawned on me that there may perhaps be more of a link between programming and classic poetry: both force you to take your ideas, and fit them into a strict harness. Any thoughts on that, Michel?
About the photos: you may know my photo gear is pretty retro, and I usually touch up the most annoying artifacts after scanning. This time, the weather (condensation along the bottom of the film strip) and the processing laboratory (numerous slanted, almost horizontal scratches) got the better of me.
Oh yes, it's such a first world problem. Yet, my Roomba has gotten stuck on my Ikea Poang chairs, about once every four or five runs.
The solution I came to was elevating the chair a little bit.
I bought some wooden wheels at Praxis (these are 60mm ones, intended to work as a wheel for, for instance, a storage box, or as helper wheels for a chair).
They are intended to work as wheels, so they are built to have an axle through them. We want to screw them onto our chair, but without the screw sticking out; so, I made sure the screw head would recess by using a countersink.
Prepare the chair by pre-drilling some holes. I used a 3mm drill for this. Since the wheels' 60mm is just as wide as the chair's legs, there is no need to measure anything, just use your hand as a guide.
Then, attach the wheels. I used 3.5x20mm screws.
To top it all off, attach some felt protectors. I had some Ikea Praktisk protectors lying around.
There you go! The chair is now just high enough so the Roomba's bumper will hit the chair instead of bravely trying to climb over it. Mind you: this works fine on my hardwood floors, but you may need some additional height if you have carpet.
I gave a talk on ARL and the data evolution problem we find there at the Semantic Technology & Business conference, 26 September 2011.
Most of the codebases I work on are in Subversion, yet I like working with Git for the sheer joy of it. Below you will find a short overview of my basic workflow. I do a lot from the command line, but not everything; for most operations, the Mac Git client Tower is the best solution you can find.
For the impatient reader, my workflow looks a little like this.
- One time only: clone repository.
git svn clone http://host.com/repository -Ttrunk -bbranches -ttags mylocalcopy
- Rebase: get all remote changes in.
git stash; git svn rebase; git stash pop
- Create awesome features, committing frequently with Tower.
- DCommit: push changes back to Subversion.
git stash; git svn dcommit; git stash pop
In a little more detail
Git works with a local history, which makes it quite distinct from Subversion. Also, Git assumes that you have a working trunk, with optional branches and tags; this means your repository should at least have a trunk directory.
Assuming your repository lives at http://host.com/repository (with the trunk in its trunk subdirectory), you can clone the remote Subversion repository to your system using
git svn clone http://host.com/repository -Ttrunk -bbranches -ttags mylocalcopy
This will copy over all revisions of the repository to your own system, placing a Git repository in your mylocalcopy directory. Since it goes through all revisions, this can take a while for repositories with a lot of commits. If Git somehow stops, and your local copy is still empty, go into the local repository directory, and execute git svn fetch.
Updating and committing
With your local working copy, you can use Tower to create your commits.
If you want to merge your copy with the remote Subversion repository, it's best to do that only after you have (locally) committed all changes. Then, use
git svn rebase
If you need to have some local uncommitted changes, use
git stash; git svn rebase; git stash pop
Pushing your changes to the remote Subversion repository is roughly like rebasing. Again, it's best to only do that with a 'clean working copy', that is, having committed all changes to your own Git repository.
git svn dcommit
Again, if you must, you can pad this command with git stash and git stash pop.
Remember that when you dcommit, all your commits will get the timestamp of the dcommit.
How do I use Git with Subversion?
I have noticed that in Git, there are at least six ways to do anything you want. I have settled on a simplified way of doing things, and I know I am missing out on some awesomeness.
- I rarely use feature branches. I only use them when I am really working on two things at the same time, but not for every feature that comes along.
- When I do use feature branches, I tend to dcommit from the feature branch, instead of merging to my trunk first.
- IntelliJ's Git integration is pretty good, and it works together nicely with external tools. In Eclipse, I don't use any integration, but rely on command line tools and Tower only.
- For Eclipse-based projects, I use Git to keep track of the full working directory, instead of tracking individual Eclipse projects. This usually means that I clone some repository at the level above the Eclipse projects, and import the projects into Eclipse using the 'Import existing project' feature.
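That last setup is easiest to see with a throwaway example. A minimal sketch (the workspace path and project names are made up):

```shell
# One Git repository at the level above the Eclipse projects,
# tracking the whole working directory.
mkdir -p workspace/com.example.core workspace/com.example.ui
cd workspace
git init -q .
touch com.example.core/.project com.example.ui/.project
git add .
git -c user.name=sketch -c user.email=sketch@example.com \
    commit -q -m "Track all Eclipse projects in one repository"
git ls-files   # every project's files are tracked from the workspace root
```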