January 4, 2017

How to (actually) "learn programming"

Filed under: bitcoin, software development, tmsr — Benjamin Vulpes @ 3:03 a.m.

More or less the same way you learn any other very complicated craft with oodles of knowledge both formalized and oral: by finding the most strict and knowledgeable master you can, and slaving for him as best you can for as long as you can tolerate it. Proper apprenticeships are an unlikely model in the States, as everyone with 9 months of React under their belt expects 140KUSD per annum and a title, but you wanted to know how to actually learn programming.

Most masters that you'll find in the wild world of shartups are neither particularly masterful nor particularly willing to entertain your novicehood. This manifests in "industry" (to the extent that building javascript webapps might be called industry) as "software engineers" (lacking the "senior" honorific) training "junior software engineers" inserted into their organization by the Diversity Machine. This is not the sort of master you'll learn much of use from, regardless of what you think of the type of master you'd like to learn from.

Since you'll not find anyone to beat 40 years of slapdash hacks into your head on the shartup circuit, you're stuck learning from the cruel, busy, cryptic and reluctant peers of The Republic, who won't be particularly useful on the curricula front.


- Applied Cryptography, Bruce Schneier (first edition)
Read the first edition, with the BLUE cover (red is the bullshit version; kudos to mod6 for the catch). Schneier redacted all of the actual goodies so that he might land a job with people who find that kind of behavior appealing and not appalling.

- Common Lisp the Language, 2nd Edition, Guy Steele
The peers have largely settled on Common Lisp as a programming lingua franca. It's an entirely adequate language, featuring ~everything you'll find in "modern" programming languages like PHP or Python. While I'm not convinced that one can "learn programming" in any other way than by building things and practicing constantly and with a relentless eye towards self-improvement, reading this book won't hurt you (too much).


- generate and secure GPG keys

This is the single most important task for anyone who intends to join The Republic. You must learn what it means to generate keys securely, how to use them securely, enumerate the kinds of threat you wish to secure your keys against, and then effect a system that tends to all of these needs.
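
What "securely" cashes out to is exactly the homework assigned above, but for flavor, here is a sketch of a gpg.conf that tightens GnuPG's defaults. The option names are stock GnuPG; whether this particular set suffices for your threat model is your problem to reason through, not mine to decree.

```conf
# ~/.gnupg/gpg.conf -- a starting point, not a guarantee.
no-emit-version
no-comments
keyid-format 0xlong
personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 AES256 ZLIB BZIP2 ZIP Uncompressed
s2k-cipher-algo AES256
s2k-digest-algo SHA512
s2k-count 65011712
```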

You also must establish and practice your backup and restoration process for these keys. Everything dies, including computer hardware, so you must ensure that you never lose access to your anchor to reality and key to the door of The Republic's forum.
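
By way of illustration, a minimal backup pass: archive the keyring directory and checksum the archive, so that rot on the backup medium announces itself. The KEYDIR path and demo contents below are placeholders; point the thing at your actual GnuPG homedir, and practice the restore, not just the backup.

```shell
# Sketch only: archive a keyring directory and record a checksum of the
# archive so that bit-rot on the backup medium is detectable later.
KEYDIR="${KEYDIR:-/tmp/demo-gnupg}"                 # substitute ~/.gnupg in practice
mkdir -p "$KEYDIR" && echo "placeholder keyring" > "$KEYDIR/secring.gpg"
STAMP=$(date +%Y%m%d)
tar -czf "key-backup-$STAMP.tgz" -C "$(dirname "$KEYDIR")" "$(basename "$KEYDIR")"
sha512sum "key-backup-$STAMP.tgz" > "key-backup-$STAMP.tgz.sha512"
sha512sum -c "key-backup-$STAMP.tgz.sha512"         # verify before trusting the copy
```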

- set up and operate a virtual server

While I cannot recommend that you make a permanent home in a virtualized server on someone else's hardware, you need a persistent Linux box that can do...things. It more or less doesn't matter which Linux you settle on if you're reading this for advice, but you should operate under the assumptions that a) you'll be relegating the machine to the dustbin at some point and b) you'll probably want to change Linux distributions as well.
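
For example, the first edit worth making on any fresh box is to the SSH daemon's configuration. The directives below are stock OpenSSH; `yourname` is a placeholder, and do confirm your key actually works before restarting sshd, lest you lock yourself out of your own rented computer.

```conf
# /etc/ssh/sshd_config -- a minimal hardening sketch.
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
AllowUsers yourname
```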

- set up an IRC bouncer

If you have the remotest dream of anyone in The Most Serene Republic of Bitcoin giving a shit about you and your problems, you'll quickly discover the importance of maintaining your own connection to the forum and not annoying the peers by reconnecting constantly. Establishing and maintaining a persistent and robust IRC connection will teach you much about the Linux and IRC client you've chosen to operate.
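
For the concrete-minded, a ZNC bouncer is one serviceable approach. The fragment below is only a sketch of the shape of a znc.conf (nick, network, and password are placeholders, and I make no promises about any particular ZNC release); generate the real file with `znc --makeconf` rather than hand-rolling it.

```conf
// znc.conf -- illustrative sketch only; generate yours with `znc --makeconf`.
Version = 1.6.3
<Listener l>
        Port = 6697
        SSL = true
</Listener>
<User yournick>
        Pass = sha256#...          // produce with `znc --makepass`
        Nick = yournick
        Ident = yournick
        <Network freenode>
                Server = chat.freenode.net +6697
                <Chan #trilema>
                </Chan>
        </Network>
</User>
```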

- set up a blog

Recount your travails in "learning programming". Muse in public. Offer your thoughts that others may know them and contradict you. This is as close as you'll get to "having a master", so have opinions, be ready to defend them, and prepare to accept that you're wrong. Don't neglect your comment system and for the love of all that is holy don't outsource it.

- operate a server

There are many ways to get into operating your own hardware, and many tradeoffs to make in the hardware procurement project. Migrating from virtualized servers to your own metal in a datacenter somewhere will illuminate all sorts of dusty corners in your head where the advocates of feeding the world with McDonald's hide the assumptions they programmed you with as a child. This project will acquaint you with the engineering tradeoffs with which programmering as a career is rife.

- run a "The Real Bitcoin" node

Once you've grown into your own hardware and have at least 5GB of RAM and 200GB of disk to spare, consider operating a TRB node. TRB is downright finicky in constrained and virtualized environments, and you're on a course to digital literacy and self-sufficiency.
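
A quick way to see whether a Linux box clears those bars before you commit it to TRB (paths and tools here assume a stock GNU/Linux):

```shell
# Report total RAM and free disk on /, in GB.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
disk_kb=$(df -k --output=avail / | tail -1)
echo "RAM: $((mem_kb / 1024 / 1024)) GB, free disk on /: $((disk_kb / 1024 / 1024)) GB"
```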


- extend Diana Coman's "foxybot", a bot for Mircea Popescu's MMORPG Eulora

MP runs an MMORPG and encourages players to automate their activities in it. Diana Coman, the current project lead/developer (do forgive the possibly-insulting title), maintains and extends both the game's codebase and that of its dominant bot "foxybot". The link to foxybot above has a list of features the playerbase would like to see implemented.

Working in this environment will teach you about the wonders of C++ and Crystalspace: respectively, a programming language with which one must be conversant but that is not particularly...good, and a "game development engine" that isn't as loathsome as other engines.

- (re)implement V

V is a hard crypto source distribution tool. Reimplementing a working V will demonstrate that you understand a foundational building block of our world.
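
To give the flavor of the central discipline (this toy is not a V): every patch in a V chain names the hash of the antecedent state it expects to act on, and a press refuses to proceed on a mismatch. The filenames and the "claimed" hash below are fabricated for the demo.

```shell
# Toy illustration of V's press discipline: before applying a patch,
# verify that the antecedent file's hash matches what the patch claims.
echo "the genesis state" > a.txt
claimed=$(sha512sum a.txt | cut -d' ' -f1)   # in a real vpatch, this comes from the patch header
actual=$(sha512sum a.txt | cut -d' ' -f1)    # hash of the file actually on disk
if [ "$claimed" = "$actual" ]; then
    echo "antecedent verified, patch may press"
else
    echo "hash mismatch, refuse to press" >&2
    exit 1
fi
```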

- make a Lamport Parachute

Stan says it all, go read it.

- operate an IRC bot

trinque and I (but mostly trinque) have put some work into a Common Lisp IRC bot. Stand one up and keep it up.

- build and host a log viewer

If you're already operating an IRC bot (and when we've made it so easy for you to do so, not doing so begins to look a bit lazy), you may contribute to The Republic's own form of distributed redundancy: many different implementations of core functionality -- in this case, log viewers. Public logs civilize the chaos and noise of IRC, and cross-referencing upgrades logs to Talmudic stature. phf hosts the canonical logs at , I host a set at , and Framedragger hosts a set at .

This project will acquaint you with the miseries of building wwwtronic software. Implementing search and cross-referencing will teach you even more.
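
To get a first taste of the problem, here's a trivial rendering pass; the `epoch;nick;message` input format is invented for the demo (every logger bot has its own), but the essential move is the same: give each line an anchor so it can be cross-referenced by URL fragment.

```shell
# Render toy IRC log lines as anchored HTML divs, one per message.
printf '1483500000;phf;hello there\n1483500060;ben_vulpes;hola\n' |
awk -F';' '{ printf "<div id=\"%s\"><span class=\"nick\">%s:</span> %s</div>\n", $1, $2, $3 }'
```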


There is no point to "learn programming" if you're just going to further the works of evil by battling for the empire's hegemony with JavaScript and "mobile apps". If all you desire is a good job and enough money to pay for beer, food, and box for your meat so that you may attract a girl in her late thirties who's looking to settle, go sign up for your local code school, capitalize on their placement program, and settle down to devour your brain elsewhere. The Republic will continue to fight without you, ensuring access to strong cryptography (see: FUCKGOATS, the only high-grade entropy source on the market in the whole world), and a Bitcoin implementation that keeps pestilential currency-fascists and -devaluators at bay.

The reading section is currently woefully incomplete, indicative of both the reading I've done in the field and what I consider the utility of various "programming books". Suggestions welcome.

December 27, 2016

"How To Learn Programming"

Filed under: philosophy, programming, software development — Benjamin Vulpes @ 8:28 a.m.

Don't worry, I'm not going to give you any useful pointers. As a matter of fact, if you read this and walk away completely deflated and as though I've torn the inspiration to make a career change from your hands, we'll both be better off. I'll have fewer kids working for starvation wages and reinventing all of JavaScript every year and a half saturating my marketplace, and you'll be spared all sorts of personal tribulations and the crippling insanity that comes of intimacy with modern computing[1]. You shouldn't learn Python, you shouldn't learn Ruby (let's be real, you were just going to use Rails anyways), you definitely shouldn't learn JavaScript, and if you value the power of independent thought, also avoid products of the cube farm like Java and Swift/Objective-C[2].

That said, since you're here and reading this nonsense, I'm going to go out on a limb and assume that you're not one of the brain-dead crab-pot postulators of such pap as "there's no such thing as wide variance in human ability at a broad spectrum of tasks, all competence can be traced to genetic predisposition to competence at the task in question", but are in fact a smart cookie willing to work your ass off for meagre dopaminergic rewards in the short term, and have a passing interest and some amount of learned skills that make you, as the numbskull linked above might say, "good at computers". Perhaps someone even told you that you might eke great piles of food credits from Capital's demand for more Labor to turn its ever-multiplying cranks of complexity. None of that will be enough: you'll need tenacity, a solid dose of ability to pretend that some things are true while pretending some other completely contradictory things are also true, a healthy disregard for public opinion (if you want to preserve your sanity), and a historical bent so that you may at least work to build the leg up on the rest of the market that is grokking the historical reasons behind why Android sucks so fucking bad. Moreover, you'll need practical reasons to soldier through the nonsense, like a fearless leader or the promise of untold wealth. Pure fun or "joy of it" will get you precisely nowhere when it wears off and the slog proper begins.

"Programming", as the touchscreen-using public knows it, consists nearly entirely of building soi-disant "apps". This domain decomposes into an enumerable set of subdomains: browser user-interface development ("webapps", "javascript apps", "single-page applications", "mobile-first websites", and more recently "React", "Elm", "Vapor" and other friends. Same old show, new cast of characters.), server-side development ("microservices", "Django", "Docker", "Rails", "Node.js"), (generously) "operations" ("DevOps", "NoOps", "ContainerOps"), and "native" development ("React Native", "Atom", "Balompagus", "Objective-C", "Swift", "Java")[3]. In-browser UI development reduces to "some JavaScript that draws shit on the page and talks to backend services"; "backend services" reduces to "a thin layer of glue code that handles HTTP requests, retrieves data from a data store (typically SQL[4], although there are very fun mistakes to make in this domain as well), and then transmutes that data into a response to the original HTTP request". "Native development" reduces to "figuring out how the fuck to imitate AirBnB's latest nightmare of complexity with the fewest lines of code and most adherence to how Apple or Google want you to build that kind of 'experience' (gag) on their respective platforms".

Building "experiences" (barf)[5] in the browser is an unmitigated disaster. To grasp how miserably fucked up building "apps" (as the public call anything with buttons on a screen at which they can paw with their grease-coated sausages[6]) in the browser is, you must understand where browsers came from and how they evolved into the shitshow you haul around in your pocket every day. In the beginning, there was plain text. Then, some people structured that text, wrapping paragraphs in <p> tags (known as "markup") to indicate that they should be styled as paragraphs, using other markup for eg lists, and so on and so forth[7].

So take a step back and think about what your browser is trying to do under the hood: take some text, marked up with various tags, apply some visual rules to it with CSS, and then execute COMPLETELY ARBITRARY CODE to oh you know maybe rearrange the ordering of lists, or replace your cursor with a spinning dick blowing loads whenever it draws over the character V or T, or oh I know pre-validate that you put a credit card number into that form input so that we can save ourselves a round trip to our servers from the user's browser. That's totally a good reason to shoehorn an impossibly bad programming language into the browser, mhm.

Fast-forward a decade. Web sites are passé, and people want at the very least "responsive" websites, and ideally "mobile first experiences" (drink). This means that the website that once needed to render nicely and quickly at 600x800 and maybe a few larger monitors now needs to look good on the 15" Macbook Pro Retina monitor (a monster of pixels, owned by everyone involved in "experience design", that no actual customer owns and yet whose pixel-count drives ~all design considerations in the industry), the Nexus Pucksell with its trapezoidal screen, the iPhone 7XXL with nearly the same number of pixels as the 15" Macbook Pro Retina but that uses an entirely different user interface built around poking at buttons drawn on a screen rather than pointing and clicking with a mouse and typing with a keyboard, and miscellaneous 4-year old Android phones that the User Experience expert in question has around from his last job but doesn't use any more except when he wants to make his devs' lives miserable. On small-budget projects.

It gets worse: because everyone involved in web dev was fathered by the kinds of neutered not-thinkers that women in America must settle for and the women aren't smart enough to have the children sired by actual quality sperm that didn't come from their meal ticket out of some perverse adherence to the local traditions of Beer, Monogamy, and Sports, your website won't even feel slow to the average cell phone user until you serve over a dozen megabytes (That's rather a lot of bytes. The last time I checked, the CH homepage clocked in at a trivial sub-30 kilobytes [kilo, mega: you do recall the SI names for orders of magnitude from your elementary education, don't you?]) of uncompressed JS and CSS that you probably aren't even serving to clients anyways. This means that there is zero pressure for people to build lightweight websites anymore, which pretty much guarantees that nobody building websites is even going to think about the repercussions (either performance or security!) of pulling in a library to trim whitespace off the beginning of strings. For example.

So that's "the frontend". A soul-killing hodgepodge of 3 "programming languages" (HTML for the text and its structure, CSS for an approximation of what it should look like, and JS for responding to user input like clicks and taps), executed all together by "The Browser" and differently by each browser. Obviously (or perhaps not obviously, I have no idea how much you know about how your apps work), these things running in the browser have to get their data from somewhere. That somewhere is your "backend services".

"Backend services" are, well, Wordpress in other languages. All systems evolve until they can send mail, and all programming languages evolve until they reproduce a Wordpress-like Content Management System (CMS). They're of variously (I hesitate to say "great", but maybe) "less heinous than the alternatives" (will suffice) quality, where "the alternatives" are the poor Drupal framework. Which you don't want to use, unless you have some amount of cause to use it and Jesus fuck surely there's a programming language you like more than PHP, right? Anyways, all that WP does is respond to requests for web pages with data from its database wrapped up in HTML. Maybe with some CSS. JS if you hired someone to make a modal or a carousel or some other Web 3.14 inanity happen to your website (which, don't. It'll break in a year and you'll be out another 3k. Plus they always look horrible). Funny story, I recently heard tell of a Wordpress plugin that pulled in an entire JavaScript framework to render a modal. Does it serve that JS on all pages or just the ones where its modal is active? Was the JS compressed? GOOD QUESTIONS, KIDDO.

I digress. Backend services respond to HTTP requests with data from the database. Sometimes they write data to the database. Sometimes they poke other backend systems. Generally, though, they're "the pure function of the server: glorious, stateless, and without any user-interface cruft", to paraphrase a man who taught me much. Backend systems, eschewing as they do the complexities of managing user interface state, are far simpler systems to build and maintain than clicky-pointy-tappy UI's. Every language used in serious force for web development (some languages are not, believe what you may) features one of these mega frameworks in which other web devs have encoded their knowledge and best practices around building web applications. In Python one has Django, and in Ruby one has Rails. PHP has Yii, and I hear that PHP is an entirely adequate(ly) object-oriented programming language these days, so who knows maybe that's a thing the budding webdev might consider using, except for how don't.

Finally, there is the wasteland of "native application development". Once upon a time this was just "software development" after the phrase was bastardized to mean "software running on consumer desktops" and not "missile control systems", but the poor notion's been degraded even further to now signify any drawing of buttons on a screen by any old monkey anywhere. It's not like she cares about the degradation, she's just happy to be with someone who can pay for dinner and maybe a kid. "You're a software developer too! Don't listen to what those mean guys on the internet say, I love you and that's all that matters." Not that any of us could afford to educate a kid in this America, but fuck I digress again.

"Native App Dev" is a glorious term for reading through Apple and Google's documentation for how to build list views and swiping image carousels on iOS and Android and then copying code from Stack Overflow that you can sorta-kinda beat into doing what the designer on staff has demanded, all naïve of the actual engineering constraints in play. Building UIs in this fashion is mostly configuration, if you can keep your designers on a tight leash and force them to design things that fit into the (admittedly inane) touch paradigms of the two platforms.

Building apps in the browser sucks, but at least a million people are out there sorta willing to lend the five brain cells they have left after playing in rock bands through their forties in resolving your problems with JavaScript and `undefined'. Plus it's sorta this weak Lisp Machine knockoff where you can kinda look at ("inspect") the web page and Chrome or Firefox will make a cursory attempt to tell you why the p tag doesn't have the spacing you might want it to have. Moreover, once your "browser apps" grow in complexity to the point where you're maintaining user state and redrawing things based on what you want to show them...well, at first you'll want to use a framework like React to get you 90% of the boilerplate you don't know that you need yet but the odds are solid that in two or three years you'll look back at whatever you built and curse fluently at the time you wasted using bloated toolkits. Nevermind that the only reason you built "apps" of the scale you did on your first dabblings is because all of the hard stuff was handled for you. Granted, you'll have a better notion of how badly those libraries handled it and with opinions and assholes in hand you'll set out to begin another cycle of the Great JavaScript Circle of Life where someone who's only ever built UI's in the browser with JS finally gets fed up with it all and decides to write the One True Frontend Framework to Solve All Everything That Sucks About Building UIs In The Browser. Hopefully you read this first and realize that contributing to the Circle of JS/Life is not actually a worthy use of your time.

Building apps for iOS and Android is no better; you're stuck in the hell that is Other People's IDEs (Integrated Development Environment, handling compilation and code browsing and documentation and autocompletion and all the civilized niceties that Emacs provides poorly and only after extreme customization). Java has the advantage of a compiler over JavaScript[8], but it's pretty easily tricked and moreover Java the language is pretty repulsive to the aesthetically-inclined. What does Objective-C have over Java? A C-like syntax? And a tangential relation to the Legacy of Steve Jobs? I suppose there's the hot new jam of Swift, and if you bite off that mountain of lurking WTF let me know what you find; all I learned in building Swift apps is that Apple apparently can't ship a compiler/IDE toolchain that doesn't require regular restarts[9].

Backend systems are the refuge of those inadequate for the task of building end-user interfaces, and of people who recoil from the notion of building user interfaces (due to aforementioned insanity in technology choices). "Nope", one guy demonstrably capable of building iOS and Android systems told me one time, "I simply will not build mobile applications." One wonders "why" for approximately five minutes, and then realizes that a high-performing backend dev in the SV-driven market pulls down just about as much as your typical high-performing frontend dev -- and without all of the insanity of user interface development (the "technology" choices are bad enough, but have you ever met a self-styled user-experience visionary?). This odd confluence of the derpiest and some of the sharper knives (in the sense that they have a stiff internal resistance to dumb shit, and a nose for finding it) breeds such monstrosities as Rails (grasshopper, explaining the ways in which Rails development wears on my will to live is beyond the scope of this piece, but let it suffice to say that I recently saw the following line of code and recoiled in horror: `authenticates_with_sorcery!'). Most appealingly, you get to work with sane-ish linuxes, and your systems remain stable while the suckers working in the browser and on iOS and Android chase every single operating system release with if not actual slavering excitement then full-bore Stockholm syndrome.

However, despair not. There are other "kinds" of "programming"! You could configure SharePoint websites for any of a million small/medium businesses. This will also make you a Microsoft stoolie, and unfit for civilized company. Also you'll be stuck with the same problems. SalesForce idem, although I don't think you have to buy the Microsoft party line to hack on their shit. You could get a plain-Jane analyst job, and then apply your not-insignificant thinker to solving business problems with technology, and nobody would tell you which languages you had to use. Shit, I know options houses with Excel spreadsheets that take an hour to run, and that's after hand-optimization.

Or you could move into the "embedded" space, and compile programs in whatever language to run on tiny devices in the "Internet of Things". Making software to run on smaller and smaller chips is an excellent career bet for the next decade, as corporate focus moves from the "one person, one device, one chip" model to a rat-king of devices infiltrating every aspect of American life.

If you're really ambitious, you could even shoot for an actual auto-didact's "Computer Science" degree. Just keep in mind that if you do this correctly, you'll not "learn programming" nearly as much as you'll learn about theories of computation. Two litmus tests: if you write a single line of code in your first semester, you got scammed; and if you don't know the lambda calculus and its combinators by the end of the third semester, idem.
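
For the curious, the combinators that litmus test refers to are the standard ones from the untyped lambda calculus; deriving I from S and K is the classic exercise:

```latex
\begin{align*}
  I &= \lambda x.\, x \\
  K &= \lambda x.\, \lambda y.\, x \\
  S &= \lambda x.\, \lambda y.\, \lambda z.\, x\,z\,(y\,z) \\
  I &= S\,K\,K \quad \text{since } S\,K\,K\,x = K\,x\,(K\,x) = x
\end{align*}
```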

The point that I apparently need at least one more glass of wine to get out is that you'll need to pick a project to "learn programming" in the context of. If you know statistics, pick up a book on what's called "Machine Learning", but is really just applied statistics, and start working through the exercises. If you've already got the hang of physics and the related mathematics, consider writing a simple physics simulator. If you want to see the results of your work, consider building a simulation of agents controlling physical objects in a simulator someone else built, like in the game development program Unity.

If you have utterly no fucking idea where to start, consider buying a Raspberry Pi and beating it into a robotic cat feeder. You might not ever finish the project, but you'll either learn how to learn how to work with the linux shitshow or you'll flunk out of computer-related auto-didactery 101.

The point is that one cannot "learn programming" quite so simply as asking "how do I learn programming". Rather, you must have a place you're trying to get to, whether that's making more money than you otherwise might or solving problems that are just too tedious to solve properly in Excel.

In any case, you really shouldn't "learn programming". The world doesn't need more programmers, and you don't need to wreck your head on the shoals of Von Neumann and his band of merry mongols. If you insist, though, I'll be here. Don't expect me to help, though, this ain't Stack Overflow.

  1. You either go batshit nuts and move to a cave, hook up with crypto-terrorists like The Bitcoin Foundation, or retreat into the soft embrace of Dunning-Krugerism, telling yourself that "this is fine, man", surrounding yourself with other middling schmucks, all the while merrily deluding yourselves into the kind of complacency that makes crushing a six pack in front of The Game so damned seductive for the typical American male.
  2. The joke's on you if you think you're "just going to learn Swift...". There are a few decades of internal Apple API's that'll need rewriting before you don't accidentally absorb some Objective-C on your way to Swift "mastery". The joke's on me if you write your app in Objective-C and then come crying for help.
  3. Pop quiz: which of this paragraph's parenthetical appellations did I make up?
  4. Another of those hoary old languages that refuse to die because they're just so miserably adequate for the tasks to which humans put them.
  5. The degree and earnestness with which people lie to themselves about their work ("user experience visionary", "Full-stack JavaScript engineer", "Rails guru") is a strong proxy for the quality of the work they produce, its utility to the market, and precisely which market they hawk their goods and services in.
  6. True story: the extremely large iPhones exist to capture the bariatric market.
  7. There was a whole pile of crazy around documents referring to other documents and semantically organizing the world's knowledge; it got as full-bore tin-woman as you might expect, with very bad results. Eventually, some well-intentioned asshole (funny how it always goes the same exact way. Every. Single. Time.) in search of a solution to some specific problems but not smart enough to evaluate the implications of his design on the world applied a healthy dose of Cascading Style Sheets (henceforth CSS) to the Hypertext Markup Language (henceforth HTML) and turned browsers, which until that point had not done a whole lot to style the web pages shown to users, into a so-bad-it-came-out-the-other-side-into-amusingly-shitty knockoff of InDesign (this is unfair to CSS: inline styling was a thing, and picking the right resolution of technical accuracy for a good read is hard). To compound all of the above, someone decreed (this would have been at the height of the Browser Wars) that their company needed something other than the Java applets that were a) in use and b) causing all sorts of havoc at the time (it may sound crazy to someone "learning programming" in the dusk of 2016, but people loaded and executed Java code in their browsers "once upon a time". That you don't find this a laughably bad idea on its face should start to give you an idea of the yawning chasm of crazy you'll have to eat if you want to "learn programming".). Thus was born JavaScript, a crime for which many are liable and will likely only discharge their debts by providing entertaining deaths.
  8. Now that you mention it, there are transpilers for JavaScript that turn a sane-ish language into valid JavaScript. There's PureScript, TypeScript, ParenScript, and ClojureScript, JUST TO NAME A FEW. One could in theory compare the different $LANG_THAT_TRANSPILES_TO_JS, but by the time you got through the top three entries on your list the JavaScript "community" would have moved on to another one and showstopper bugs would emerge in the three you tested. Do you begin to understand why I think that opening your wrists might be a better use of time than "learning programming"?
  9. It's so bad that I actually got into the habit of preemptively quitting Xcode because its auto-completion backend would crash and never notify me. The easy and humiliating solution was simply to restart Xcode whenever it failed to complete symbols I was typing. And yes, before you ask, I'd quit Xcode regularly expecting the autocompletion framework to work again on relaunch, only to discover that I'd mistyped the symbol prelude. Such is the cost of shitty tooling.

December 22, 2016

veh.lisp genesis.vpatch

Filed under: bitcoin, software development, tmsr — Benjamin Vulpes @ 11:06 p.m.

At phf's prodding, I present in this post a genesis vpatch and corresponding signature for my Common Lisp implementation of asciilifeform's V. In case you've forgotten, V is a hard-crypto software source distribution tool that gives The Republic delightfully hard guarantees about who has endorsed what changes to a software project. Details are here: V-tronics 101.

It is useful for code-savvy folks in The Republic to reimplement basic tools like this. Multiple implementations of an ambiguous specification provide far more value than the "many eyes" mantra of open source advocates. For example, an implementation in Python might burn the eyes of a Perl hacker, and the Perl be entirely inscrutable to a man who's never touched it before, and even were such a man to sit down and learn Python for the purpose of auditing another's V implementation, it is in no way obvious that the time cost of his learning the language combined with the risk that he misses details in the audit is a better resource expenditure than simply implementing the tool again in his language of choice. Multiple implementations provide the Republic defense in depth, in stark contrast to the Silicon Valley software monocultures, and demonstrate to the Peers that the authors understand the goals and subtleties of the project in question.

phf did not just prod me to post my implementation, however. The charges are serious, so allow me to quote in full:

phf: ben_vulpes: this subthread since your response to my original statement is one example of what i'm talking about. in this case none of the v implementations are on btcbase, because nobody wants to sign own hacks, because the cost of failure is too high.

For an example of just how this notion that "the cost of failure is too high" came to be:

mircea_popescu: to put it in you'll have to sign it. if it turns out later to have a hole, people will negrate you.

To contextualize phf's comment properly: the man set up a spiffy loggotron (the one I cite here constantly, actually) and then hared off to the desert for a few weeks without ironing some stability issues out first, which left us without logs for a bothersome amount of time. While kicking a process over may be acceptable (in some contexts, on the deficit budget the Republic operates), that style of process monitoring and uptime insurance only works if someone is available to restart the process in question whenever it goes down. Which it wasn't, and for which he was roundly scolded upon his return.

So yes, the reputational costs of operating critical infrastructure (in phf's case, the canonical log of the Forum's dealings) for The Republic and then letting that infrastructure fail are rather steep. Note, however, that he has since ironed the stability issues out, and the whole episode has largely been left behind. No negative ratings were issued as a result, that's for damn sure.

The brouhaha that kicked off my rewrite of my V implementation is barely worth going into1 but for four details: the discovered bug was not a hole, but required that an operator attempt an action actively harmful to their own health; the implementation's author fixed the problem in short order; that author was already a member in good standing of the #trilema Web of Trust; and the issue was discovered by members of the Republic and not leveraged into an attack.

Much of the Republic's otherwise incomprehensible-to-outsiders behavior may be chalked up to precisely this sort of "trust building exercise", and there is no way to build a nation of men but this way. A strong reputation buttresses a sapper against charges of treason, leaving space for the WoT to entertain the notion that the sapper is not treasonous but has merely made a mistake. Moreover, fear of failure's repercussions must always be evaluated and mitigated in the same way that one performs security analyses: "What are the downsides here? How might these changes fuck my wotmates? How pissed could they reasonably get at me for hosing them thusly? How would I respond to allegations of treason?" Not that anyone's on the stand for such, but one must entertain the gedankenexperiment.

So, in the spirit of:

phf: but the reason i made those statements yesterday is because i think that like saying things in log is an opportunity to be corrected, so does posting a vpatch, it could be a learning experience. instead the mindset seems to be
a111: Logged on 2016-02-20 22:45 phf: "i, ordained computron mircea lifeform, in the year of our republic 1932, having devoted three months to the verification of checksums, with my heart pure and my press clean, sit down to transcribe vee patch ascii_limits_hack, signed by the illustrious asciilifeform, mircea_popescu, ben_vulpes and all saints."

I am proud to publish a genesis vpatch for my own V implementation in Common Lisp. It is a "harem-v" (which is to say a V implementation that this individual uses in the privacy of his own workshop, and may not suit your needs or even desires), but I daresay that it is correct in the important places. Even if it is wildly incorrect in those important places, it demonstrates quite completely that The Republic outperforms classic "open source" communities by reproducing and spot-checking each other's work instead of pretending to read it and only ever actually grepping for hateful words in order to be a most respectably-woke Github contributor. I also offer it in the spirit of the above log line: to seek correction and feedback on best practices from peers more competent with the parens than myself.


Updated 12/27/2016 with hashes_and_errors.vpatch


One simplification that I made in this implementation, relative to the others, is that I iterate naïvely through all of the signatures (until one is found that verifies) when confirming that a patch has a signature from wot-members, rather than sorting the lists of patches and signatures and making assumptions about patch/signature naming. This slows `press' operations down significantly, but `flow' calculations complete nearly instantly.
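The naive strategy is simple enough to sketch in a few lines of Python. This is an illustration, not a transcription of my implementation: the function and parameter names are made up, and `verify` stands in for whatever actually shells out to gpg.

```python
def patch_signed_by_wot(patch, signatures, wot_keys, verify):
    # Try every signature against every WoT key until one verifies.
    # No assumptions are made about patch/signature naming conventions,
    # at the cost of O(signatures * keys) verification attempts per patch.
    for sig in signatures:
        for key in wot_keys:
            if verify(patch, sig, key):
                return True
    return False
```

The quadratic cost is exactly the `press'-slowing tradeoff described above; the benefit is that a misnamed signature file cannot cause a valid signature to be silently skipped.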

Enjoy! If you find anything heinously wrong, do let me know. I shan't be falling on my sword over it, but I will fix it if you can show me that it is in fact broken.

Updated 01/04/2017 with 2017_cleanup.vpatch


  1. tl;dr: a V implementation was willing to press to heads for which it had no signatures. Its author has since remedied that. []

December 2, 2016

Software Maintenance Costs and Depreciation Schedules

Filed under: management, software development — Benjamin Vulpes @ 8:47 p.m.

From Mircea Popescu's "The planetary model of software development.":

mircea_popescu: ...It's starting to look to me as if software is in the same situation, every distinct item gravitating against the Sun of practice.
diana_coman: ...FWIW as experience: this structure of the bot which is quite sane still is actually at least the 2nd total re-write basically. Not because I started with an insane structure but because the first one got totally messed up when confronted with practice basically.


There exists a closest-safe distance from origin, given by the specific resistance of materials (ie, hardware, programming languages and other meta-tools), wherein the software presents as a molten core surrounded by a solid crust. Should such a planet move closer to the origin, through a rationalization of its design, it will thereby implode.

Implode, shred and smear into an accretion disk due to gravitational forces, whatever. Given a static problem, the solution to which delivers some utility, and a software proggy designed and built to solve that problem, the little proggy experiences gravitational stresses from edge case handling alone: at a certain point in the boiling-off-of-pointless complexity, software authors encounter the brick wall of edge cases and malformed inputs, and this forms the complexity floor for a piece of software handling static business requirements. The ideal programmer burns off absolutely everything not essential to the functioning of the thing, rejects inputs aggressively, and writes the whole system to do one single thing reliably. I imagine that Phuctor is a good example of this.

Now this proggy should run forever! Untouched! It's not as though the mathematics underpinning key factorization is changing anywhere near as quickly as a Javascript developer's taste in frameworks weathervanes around, is it?

Nevertheless, Stan finds himself condemned to rewrite the thing yearly. Because the problem changes, or someone wants to ship him SSH keys, or because the DB can't handle replication, or whatever. Merely that Stan is going to change things entails some amount of system rewrite, as it was designed to fit its task extremely narrowly, and will not readily embrace "just a little change". Mutating software to respond to a changing problem is expensive, time-consuming, and must be planned for. A program that must respond to feedback from its use in practice survives not just work performed upon it from going around and around its star of value at high speeds, but also (to stretch the analogy) highly local gravitational gradients as its managers and operators reshape it in realtime.

The model (incorrect though all models may be) does provide some utility in explaining observed phenomena. On the docket today: depreciation schedules and maintenance costs.

A very rough treatment of depreciation: "the speed at which your shit rots". For freight-hauling trucks and other hardware like lathes and mills, this is largely a function of their time-in-use. As we crank bar stock through the lathe, its working parts experience all sorts of vibratory and static loads, the components deform (perhaps permanently), shit gets into the bearings, and entropy wins like she always does.
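For concreteness, the simplest schedule an accountant might apply to that lathe (or, per the argument here, to a software system) is straight-line depreciation. The numbers below are illustrative only:

```python
def straight_line(cost, salvage, useful_life_years):
    # Write the asset down by an equal amount each year of its useful life.
    return (cost - salvage) / useful_life_years

# A 50,000 system written off over 5 years with no salvage value
# depreciates at 10,000 per year:
annual = straight_line(50_000, 0, 5)  # -> 10000.0
```

The whole difficulty with software, as argued below, is picking `useful_life_years` honestly.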

Surely software doesn't depreciate, though! It's code! It does the same thing every time! It can't wear out, and it certainly cannot break down! If it breaks, it can't be said to ever have actually worked in the first place, and that's a different case. But working software does not depreciate! Paul Niquette expounds at length on this idea in Software Does Not Fail(archived).

Fine, your shit doesn't stink. It doesn't rot, it doesn't change, but the world in which you wrote it does. Possibly you need to handle CRLF where before you expected to only ever catch a CR or LF; or your boss procured new colleagues for you; or the only person who understood that system went sessile and ate his brain. The list of changes that could affect the relationship between the business and the code it bought is not enumerable.

The business manager then has 2 extremely difficult questions to wrestle with when handling software: on what schedule shall I depreciate this asset, and how do I estimate its maintenance costs?

A few key factors drive software depreciation schedules: team turnover rate and employee makeup, correctness-ensuring infrastructure, and the speed at which the business demands that system respond to changing business requirements.

A high team turnover rate, coupled with an employee makeup such that new staffers take quite some time to get up to speed on system internals and aren't terrifically productive for even longer drives depreciation rates up. Managers work against this with various strategies: "we're a Perl shop", "we only hire the top 1% of applicants", "we only hire Stanford grads", but the actually useful strategies are impossibly difficult for a shop hiring commodity labor: hire smart people that get along with your existing team, who understand the programming languages and environments in which your systems are written, and for the love of fuck keep turnover low and by extension context switching as well. A mid-size team with a high turnover rate can result in systems that nobody in your organization understands how to work on. In this particular nightmare, it may be simpler to approximate the depreciation schedule as a function of how long it takes to turn your workforce over -- if nobody's around who understood how it worked in the first place, it may be worth the org's money to rewrite the thing (especially if your hiring pipeline is comprised of people chasing trends in javascript development...). Obviously, a team of three working on a given system for a decade or more will be negligibly bitten by this dynamic.

Correctness-ensuring infrastructure comes in many different forms: QA teams, a robust testing philosophy and the discipline to follow it, and type systems. Well, "type systems": in practice this is a framework for ensuring that the human-cogs all mesh together neatly, like Java or Objective-C, and of course an overbearing IDE to combat the tendency of a mean-reverting population of programmers to write code that doesn't work. Smaller teams whose members think more highly of themselves may run on "strongly typed" languages like Haskell, and use that as a large part of their correctness-ensuring infrastructure. This infrastructure drives down both maintenance costs and the depreciation rate, by dropping developers into an environment in which even if they can't say for sure whether or not what they wrote will work, they have tooling to identify whether they broke other parts of the system in pursuit of today's task.

Turnover and meta-tooling aside, possibly the largest factor in depreciation schedule is the speed at which the system must change to accommodate feedback from practice, and how much change it must swallow. If large new features need shipping on a regular basis, you must hire high-quality programmers and pay them well over a long term in order to keep that pace up. Should you be attempting to cram a shitload of changes through your system, you will probably need to staff out sideways, not necessarily hiring the biggest and most expensive guns, but smearing human horsepower across the feature attack surface to drag the whole thing together. The peasant vector field from Seven Samurai, if you will.

A system designed and implemented by smart folks, with a limited functional attack surface, for an organization with low turnover that doesn't need many changes after delivery or where the scope of changes is fairly well-constrained will have a very long depreciation schedule, possibly in excess of five years. A system not so much designed as stapled together ad-hoc out of JavaScript frameworks by a couple of Code Academy dropouts that accidentally finds itself with some venture capital and customers may need replacing within two years as the costs of maintenance balloon and pace of feature delivery retards.

Maintenance cost analysis concerns itself with a related problem: given an established software system that needs regular deployments (let us roll with the server-side application development model), and has an ongoing stream of new features and bug fixes applied to it, how does one get a handle on the relationship between the rate of feature development and bug patching and the dollars spent?

Maintenance encompasses (among other things): developing new features and completing tasks, fixing existing bugs, ensuring that the team has not introduced any new bugs or broken existing functionality, pushing the new code out to the servers where customers will use it, and mutation of database systems in support of new code (to cherry-pick just a few topics). Costs and project velocity are a function of team size and composition, the aforementioned correctness-ensuring harnesses (automated testing, mature programming languages), the cost of deployment, and development cycle time.
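A deliberately crude way to frame the relationship between those factors and dollars spent is monthly headcount cost plus per-deployment overhead. This is a toy model of my own devising, not something from any accounting text, and every number below is made up:

```python
def monthly_maintenance(devs, loaded_cost_per_dev, deploys_per_month, cost_per_deploy):
    # Headcount dominates; deployment cost is the lever that automation
    # (discussed below) actually moves.
    return devs * loaded_cost_per_dev + deploys_per_month * cost_per_deploy

# Four developers at 15,000/mo loaded cost, twenty deploys at 50 apiece:
monthly_maintenance(4, 15_000, 20, 50)  # -> 61000
```

The model's one honest lesson: unless deployments are absurdly expensive, the way to cut maintenance cost is to need fewer, better people, which is the team-composition argument above restated as arithmetic.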

Team makeup and composition factors that drive maintenance costs: skill distribution among the team members; how quickly team members can perform tasks; how rigorously they test or decline to test their own work; with how much discipline team members handle the automated or manual testing process. Intelligent people who get shit done quickly and care about writing tests to cover both new bugs and new features are very valuable, but only when leading or part of teams that share those values. Average schmucks who just want to collect a paycheck and do the least amount of work without getting fired will slow the team down, and set the foundation for an extremely sharp mean reversion if included in high-functioning teams. You can imagine the glacial speed with which entire teams comprised of such people might move, and the concomitant costs.

Cost and speed are also intimately affected by the correctness-harnessing infrastructure: test suites not only protect against shipping bad software, but they also accelerate developer cycle time and confidence in system correctness. Some GUI-heavy apps built with no consideration for automated testing may impose a 2 minute (or more!) cycle time, as a developer: compiles the app, boots it, logs in (is the person in question smart enough to hard code login credentials during testing? Rigorous enough to ensure those changes are never committed to the codebase?), and pokes it into the questionable state. Running the whole test suite might take 30 seconds, and running just a single test might be as fast as 10 seconds (it's entirely reasonable to shoot for 1 test per second, though hardly a single backend system needs that, and ever since the last round of upgrades I've despaired of getting any sort of performance out of Xcode). A high rate of feedback with the system-under-hack is necessary not just to maintain the precious high-concentration state but also to keep dipshits from thinking they're excused to watch a cat video and to forget that their app is compiling, not to mention trashing all of the valuable cognitive state that goes into debugging years of spackled-on complexity.
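The cycle times quoted in that paragraph translate directly into feedback opportunities per hour, which is the number that actually matters for keeping a developer in the high-concentration state:

```python
def cycles_per_hour(cycle_seconds):
    # How many complete edit-run-observe loops fit into one hour of work,
    # ignoring the (considerable) cost of the human context switch itself.
    return 3600 // cycle_seconds

cycles_per_hour(120)  # manual GUI poking: 30 loops/hour
cycles_per_hour(30)   # full test suite: 120 loops/hour
cycles_per_hour(10)   # single targeted test: 360 loops/hour
```

An order of magnitude more feedback per hour from the same developer, purchased once, is the economic case for the correctness infrastructure described above.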

Maintenance also entails delivering the damn code, the costs of which also vary strongly as a function of institutional history and bent. Some organizations (still!) deploy code manually to disastrous result: see the case of Knight Capital, where a manual update process missed a server and burned through nearly half a billion dollars. Other organizations deploy code to production servers automatically, every time that the canonical repository is updated and the tests pass. A given organization's position on the continuum from entirely-manual to entirely-automated code deployment is a solid data point that hints at the ability of the organization (or at least one of its software arms) to effectively expend capital to reduce maintenance costs (but only if they haven't gone unnecessarily overboard with the capex. Not everyone needs a Chaos Gorilla). This dynamic explains quite a bit of Heroku and other IaaS companies' success: "Instead of writing and maintaining a pile of shell scripts to mutate a set of servers in a data center somewhere (capital expenditure), we can rent servers from Amazon/Heroku/Google and pay a monthly fee atop that for their deployment abstractions!" For some orgs, this works really well. The breakeven point on Heroku is somewhere around 16KUSD/mo: 12 for the operations engineer, and 4 for the colocation fees. You can buy a lot of Heroku for 16K/mo. Whether you can rent a virtual server in a virtual server that Paul Graham is renting from Jeff Bezos without wanting to open your wrists and barf into the nearest liberal arts graduate's mouth is, however, beyond the scope of today's piece.

We all want to build software that lasts, and runs unperturbed in a closet for decades. Failing that, we would like to be able to respond to changes in the real world as expediently as possible. The best solution is obviously the trivial one: the smartest person possible, working with his favorite tools, in a domain he knows intimately. Should we need to go to war with the budget and army we have and not the ones we wish that we had, some guidelines emerge: keep teams small and turnover low; invest (appropriately!) in correctness-ensuring infrastructure, be that tests or type systems; and automate deployments to the extent possible.

October 7, 2016

Least-Effort Signups in Django

Filed under: django, magic, python, software development — Benjamin Vulpes @ 12:57 a.m.

It's wwwtronix hell weekmonthyeareternity at Cascadian Hacker!

Alonso complains:

No automatic login1 :(

So, have some Django-flavored Python that'll create, save and log your new users in2 in one swell foop:

from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from django.http import HttpResponseRedirect
from django.views.generic.edit import CreateView

class SignupForm(UserCreationForm):

    def save(self, commit=True):
        # super(SignupForm, ...), not super(UserCreationForm, ...):
        # skipping UserCreationForm's save would skip password hashing.
        u = super(SignupForm, self).save(commit=False)
        u.is_active = True
        if commit:
            u.save()
        return u

class Signup(CreateView):
    model = User
    template_name = 'SOME_TEMPLATE'
    form_class = SignupForm
    success_url = "/"

    def form_valid(self, form):
        u = form.save()
        # Note: on newer Django versions, login() wants to know which
        # auth backend authenticated the user.
        login(self.request, u)
        return HttpResponseRedirect("/")

There. That wasn't so hard now, was it?

  1. For context: the Django web framework provides some pre-baked forms and validators to handle new user signups. However, none of them go so far as to set the session cookie and tell the framework to "log the user in". That is what Alonso is complaining about. []
  2. The "activate my account" meme is just another step in your funnel in which potential customers will (at some statistical rate) fall out and hit the floor. You didn't need it, you're welcome. []
Older Posts »