The true innovation of the sharing economy—or maybe it’s the startup economy, or entrepreneurship (or maybe just…capitalism)—is in the continued refinement of the perception of value, not necessarily in offering new services and developing new products, but in making them available for cheaper, because, as it turns out, when you don’t pay anyone a salary or give them benefits because they’re all subcontractors, and you don’t actually have to invest in any of the infrastructure upon which your business model depends, either directly or by paying taxes, your costs are a lot lower than everyone else’s.
John Allsopp reflects on A Dao of Web Design, originally published 15 years ago, and the web today:
Perhaps those advocating this position, that progressive enhancement is old fashioned and quaint, that the Web is dead or dying because native apps are better, are right. Perhaps the idea of an application is the apotheosis of the very idea of human computer integration, and the Web, in falling short, well, in being different, is an evolutionary dead end.
But I continue to believe, just as the Web is not print, though it emerged in many ways from the medium of print, it is not just another application platform. It has its own genius, which we could call as I did all those years ago, adaptability.
This is a position that I will have a very hard time letting go of, but one that I find increasingly difficult to uphold with beginners or those who “weren’t around” at the time—and even, I think, people who were but never fully bought into the idea of “progressive enhancement”. The problem is still, I think, one of articulation. We need better explanations for what the web is, why it is important, and why anyone should care.
That, or, we simply need to step aside and let the upcoming generation come to their own conclusions in this regard, perhaps with better and more robust solutions than came before.
Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi:
Acquiring the mechanical skills of programming—learning how to write instructions or expressions that the computer understands, getting to know what functions are available in the libraries, and similar activities—aren’t helping you much with real programming. To make such claims is like saying that a 10-year old who knows how to dribble can play on a professional soccer (football) team. It is also like claiming that memorizing a thousand words from the dictionary and a few rules from a grammar book teaches you a foreign language.
Programming is far more than the mechanics of language acquisition. It is about reading problem statements, extracting the important concepts. It is about figuring out what is really wanted. It is about exploring examples to strengthen your intuitive understanding of the problem. It is about organizing knowledge and it is about knowing what you don’t know yet. It is about filling those last few gaps. It is about making sure that you know how and why your code works, and that you and your readers will do so in the future. In short, it is really about solving problems systematically.
That’s the problem. Remove the tools, and we’ll recover speed.
I think the fear part is genuine, and I feel it myself.
I’ll confess to having absolutely no clue as to what is going on anymore with front-end web development, despite it being my first job, and a subject I’ve taught for years. It would be bad enough if the problem was just lack of experience, but what I all-too-often observe is a sneering attitude about perfectly common-sense ideas like “progressive enhancement.” And that includes web development instructors I’ve met.
Lately I’ve been trying to just ignore the whole thing. (Which isn’t really working.)
Rather than building devices that could enhance human memory and human knowledge for each individual, education technology has focused instead on devices that standardize the delivery of curriculum, that run students through various exercises and assessments, and that provide behavioral reinforcement.
We’ve got a lot of work to do, and I always think that the first task is to start to win over hearts and minds. And for that purpose, I’m glad Audrey Watters is around.
Audrey quotes at length from Vannevar Bush’s 1945 piece As We May Think and mentions Doug Engelbart’s 1962 report Augmenting Human Intellect: A Conceptual Framework—both of which should probably be required reading for anyone looking to contribute meaningfully to shaping the role of computers in our lives. Bush’s piece is the more accessible of the two, by far, but Engelbart’s is worth a skim at least.
2) I will not read my paper line by line in a monotone without looking at the audience. I needn’t necessarily abide by some entertainment imperative, with jokes, anecdotes or flashy slides, but I will strive to maintain a certain compassion toward my captive audience.
Man, I thought tech conferences were awful. I’m pretty grateful, at the very least, that I haven’t actually felt obligated to attend any. I can’t imagine what it would be like to have to attend academic conferences. Sounds dreadful.
Ben Orlin on questions as a non-renewable resource:
Questions were not just things to answer; they were things to think about. Things to learn from. Giving the answer too quickly cut short the thinking and undermined the learning.
Good questions, in short, are a resource.
Solving a math problem means unfolding a mystery, enjoying the pleasure of discovery. But in every geometry lesson that year, I blundered along and blurted out the secret. With a few sentences, I’d manage to ruin the puzzle, ending the feast before it began, as definitively as if I’d spat in my students’ soup.
Math is a story, and I was giving my kids spoilers.
In the absence of community coordination, methodless enthusiasm will ensue—and caught somewhere in the Bermuda triangle of competing standards bodies, implementers, and OSS maintainers is the developer community. If we want our community-driven projects to become official, internationally recognized standards, we need to understand the impact of our governance processes as well as we understand the technical specifications for our technologies.
Even though I’ve been sitting/standing around defending the open and standards-based approach to governance over the web this week, I sometimes just take a step back and go “Ugh.”
Tim Parks goes into greater historical depth than I’ve seen elsewhere regarding the rising tides of available reading material (and attendant sense of overwhelm along with accompanying apocalyptic commentary).
If you’ve already got too much to read, skip to the end:
How to respond, then, to this now permanent condition of overproduction? With cheerful skepticism. With gratitude for those rare occasions when we come across a book that speaks to us personally. With forgiveness for those critics and publishers who induce us to waste our time with some literary flavor of the day. Absolutely without indignation, since none of this is anyone’s particular “fault.” Above all with a sense of wonder and curiosity at the general and implacable human determination (mine included) to fill endless space with dubious mental material when life is short and there are so many other things to be done.
And yet, although medieval Europeans had figured out how to build the same kinds of complex automata that people in other places had been designing and constructing for centuries, they did not stop believing in preternatural causes. They merely added ‘mechanical’ to the list of possible explanations.
The cane soon became a source of self-consciousness. “My eyeglasses would get compliments,” she told me, “but my cane would get a funny tilt of the head from people, as if they were thinking, ‘What’s wrong with you?’ ” For months, she was despondent. One thing that helped her recovery was finding a purple cane, while browsing online, to replace her drab, hospital-issued one. “I went from walking hunched down, wanting to hide, to actually being proud of it,” she said. Sometime afterward, she was shopping at J.Crew, her favorite store, and it occurred to her that her cane would look beautiful with the brand’s Kelly-green T-shirts. That led her to begin asking J.Crew, through e-mails, blog posts, and open letters published on Facebook and Twitter, if it would sell a fashionable cane—to broaden its customer reach and to help ease the stigma attached to assistive devices.
I’m still getting used to Apple as a fashion brand. But the idea of intersecting fashion, design, and technology with accessibility (beyond the usual software-level accessibility)? Fascinating.
Comfort noise is a fake hiss that your mobile phone, your VoIP phone, your corporate digital phone system, whatever, creates to mask the silences between talkspurts. That hiss isn’t actually coming down the line, from some analogue amplifier and hundreds of kilometres of copper; it’s created independently at each end by kindly computers.
Aditya Mukerjee digs into the problems inherent in having our digital representations of language governed by an elite group whose dues are expensive, composed of predominantly white, predominantly male, predominantly American and Western European members.
Gayatri Chakravorty Spivak has written, ‘The subaltern cannot speak’. They are structurally prohibited from having any dialogue – even an unbalanced one – with the very powers that oppress them. Access to digital tools that respect our languages is crucial to communicating in the Internet age. The power to control the written word is the ability both to amplify voices and to silence them. Anyone with this power must wield it with caution.
I generally imagine that I prefer that there isn’t really a “standard” English, allowing for a flexibility in “correctness” which I feel is valuable to me as a speaker and as a writer.
Of course, there really is a standard; it’s just harder to pin down because there’s no one organization or website devoted to it. But it’s still there, and structurally just as problematic as the Unicode Consortium, but without any of the transparency. David Foster Wallace captured it in Tense Present:
I don’t know whether anybody’s told you this or not, but when you’re in a college English class you’re basically studying a foreign dialect. This dialect is called ‘Standard Written English.’ … From talking with you and reading your essays, I’ve concluded that your own primary dialect is [one of three variants of SBE common to our region]. Now, let me spell something out in my official Teacher-voice: The SBE you’re fluent in is different from SWE in all kinds of important ways. Some of these differences are grammatical — for example, double negatives are OK in Standard Black English but not in SWE, and SBE and SWE conjugate certain verbs in totally different ways. Other differences have more to do with style — for instance, Standard Written English tends to use a lot more subordinate clauses in the early parts of sentences, and it sets off most of these early subordinates with commas, and, under SWE rules, writing that doesn’t do this is “choppy.” There are tons of differences like that. How much of this stuff do you already know?
I’m respecting you enough here to give you what I believe is the straight truth. In this country, SWE is perceived as the dialect of education and intelligence and power and prestige, and anybody of any race, ethnicity, religion, or gender who wants to succeed in American culture has got to be able to use SWE. This is How It Is. You can be glad about it or sad about it or deeply pissed off. You can believe it’s racist and unjust and decide right here and now to spend every waking minute of your adult life arguing against it, and maybe you should, but I’ll tell you something: If you ever want those arguments to get listened to and taken seriously, you’re going to have to communicate them in SWE, because SWE is the dialect our country uses to talk to itself. African Americans who’ve become successful and important in U.S. culture know this; that’s why King’s and X’s and Jackson’s speeches are in SWE, and why Morrison’s and Angelou’s and Baldwin’s and Wideman’s and West’s books are full of totally ass-kicking SWE, and why black judges and politicians and journalists and doctors and teachers communicate professionally in SWE. Some of these people grew up in homes and communities where SWE was the native dialect, and these black people had it much easier in school, but the ones who didn’t grow up with SWE realized at some point that they had to learn it and become able to write in it, and so they did. And [INSERT NAME HERE], you’re going to learn to use it, too, because I am going to make you.
Our data is out of our control. We might (wisely or unwisely) choose to publicly share our statuses, personal information, media and locations, or we might choose to only share this data with our friends. But it’s just an illusion of choice—however we share, we’re exposing ourselves to a wide audience. We have so much more to worry about than future employers seeing photos of us when we’ve had too much to drink.
Had I not read this review, I don’t think I would have gotten this update regarding Adria Richards:
Along these lines, one of the most captivating stories in the history of the Internet involves an incident that, happily, Ronson covers in depth. At a developer conference, two dudes, “Hank” and Alex, were cracking mildly off-color jokes to each other. Adria Richards, a woman sitting in front of them, photographed them and reported them to organizers. They explained the situation and were released. She tweeted and blogged about it. Hank was fired, then apologized in a public forum. The website of the company where Richards worked was forced down; then she was fired as well. Hank got a new job right away. Richards did not. Instead she spent a year fielding rape and murder threats.
But how, Ronson wonders, had Hank’s relationship with women developers changed since the incident? “Well,” Hank tells him. “We don’t have any female developers at the place I’m working at now. So.”
Of course. Of course “We don’t have any female developers at the place I’m working at now.”
As for Richards:
At the same time, Adria Richards opened up on Twitter. She had submitted 120 incidents of abuse to Twitter, she wrote — in a single week. They did nothing. After selling her furniture on Craigslist and moving out of her apartment, she found a therapist with experience in PTSD. She was still jobless, nearly two years later, but in February, she announced she had applied for a job — in the user safety and security department at Twitter. She didn’t sound as if she’d be holding her breath for an interview.
Choire sticks the landing:
The experience of women online is the great link between speech and violence, between offense and abuse. For women — and for all gender offenders, from gays to trans people — insult and the threat of murder are issued simultaneously. Like almost every other book, then, “So You’ve Been Publicly Shamed” would probably have been handled better by a woman.
In a continuing series, I’ll highlight newfound heroes of mine. Each hero is someone that I did not know existed until recently, whose achievements were not mentioned in any of my textbooks. This inaugural post of the series begins with Annie Jean Easley, an African-American computer programmer, mathematician, and aerospace engineer who worked from 1955 to 1989 at the National Advisory Committee for Aeronautics (NACA) and its successor, the National Aeronautics and Space Administration (NASA).
To end a blog is to reject this ethos. It’s to recognize that these things we do on the internet do have endings. And having an end is a wonderful thing to have, because to have an ending is to have a beginning and middle too.
If you haven’t already read this, Mandy Brown has some thoughts on Medium, blogs, and Leonard Nimoy:
All of which is a long-winded way of saying that our core discomfort with Medium—with most of online publishing—is we can’t quite see how the money works no matter how hard we squint. And we’re naturally suspicious of the ways that money skews our relationships, with each other and with art.
Some of my favorites that made the list this year:1
Source Serif is a wonderful project from Adobe to create an open-source companion to Source Sans. So far, they’ve released a growing number of weights, but for a text face it’s still sorely missing italics. Source Serif draws inspiration from the work of Pierre Fournier, so it bears a striking resemblance to Matthew Carter’s Charter, which for now I’d still recommend if you need a complete text face for the web.
Marr Sans is a flexible sans from Commercial Type, one of my favorite foundries. It has just enough quirky details to keep it from being dry, but feels extremely durable.
Cooper Hewitt: The Typeface is the new identity for the Cooper Hewitt Museum, which open-sourced the font. It’s gorgeous, has several weights, and a narrow frame which makes it extremely mobile-friendly. Since discovering it, I’ve found myself recommending it for use on résumés, where it performs admirably.
Druk is another new entry from Commercial Type, this one designed by Berton Hasebe. It comes in an unbelievable array of widths.
Input is a font for code from David Jonathan Ross at Font Bureau. If you haven’t already checked it out, it’s dazzling. And free for personal use in your text editor of choice, to boot.
It hasn’t escaped my attention that I’ve been more drawn in the recent past to fonts which are available for free in some sense. Source Serif and Cooper Hewitt have both been open sourced, and Input is free for personal use. Lately on this very blog, I have been using a pair of open-source fonts through Google’s font service: Karla and Libre Baskerville. My desire for freely-available fonts has shifted as I’ve moved from client work (where I can include a budget for fonts on given projects) to teaching (where I’d like my students not to rack up any additional debt if possible).

In professional design work, I still have some strong feelings about organizations which should be able to afford it being stingy about type-related expenditures, but at the same time I don’t think that the economics of fonts have caught up to the changes in typesetting brought on by the web. I’m not convinced of the staying power of the subscription model myself, in part because I feel uncomfortable about what it might mean for our ability to retain cultural artifacts from this period of the web. But I believe there are still types of businesses and individuals who probably would spend more on type and professional design services, if there were a better system in place.
My number one ask, which sounds nearly impossible, would be for a rise in fonts which are free for use on personal computers, but must be licensed for use in distributed materials. That would be an absolutely incredible change. In the meantime, I am intrigued that there were so many high-profile “free fonts” entering the market this year. ↩
Now that class is over, I finally have some time to look over what my students have been posting this past month. Melbity has some great notes from a lecture I gave.
These technology-oriented topics are always a challenge for me to address with design students because there is a very real question as to the extent to which designers should be familiar with the underlying technology that their work gets built in, even whether they should be able to build out designs themselves.
My medium of choice is clearly the web, although I have a great deal of respect for designers who work with other media. To me, there isn’t a large gap between designing for the web and designing for native apps, since they both tend to be screen-based and interactive.
At any rate, I was happy with how my approach went this time, and I’m glad Melbity was there to take notes.
Rebecca Rubenstein, in her introduction to the issue:
Maybe it’s because I fundamentally believe books have the power to alter us, to shape us, and yes, to even save our lives. Maybe it’s because I know the ways in which words can affect us—the ways sentences and stories can crawl inside of us and live there forever. While bookselling may not be known for its urgency—we’re not EMTs and no one is going to jail if they leave a bookstore empty-handed—a recommendation is still a gift. A book passing from one hand to another is not just an action; it’s an invitation to experience something transformative.
From LettError, a way of looking at the question that I see less frequently:
Artisans of the past — the predecessors of our designers’ guild — were rarely satisfied with tools as they found them in the shop. They always had the tendency to personalize their tools, to appropriate them by honing them, converting them or expanding them. The more specialized the work, the greater the demand for customized or individually made instruments. For instance, letter-cutters in the past thought up methods of working faster and more meticulously, and to that end they designed not only new fonts, but also counterpunches and other new tools. It must be possible to do something similar with software. After all, programming graphic programs is much too important to leave to programmers.
The designers of Smalltalk (Alan Kay, Dan Ingalls, and Adele Goldberg principally, and others), given the resources and freedom of Xerox PARC, worked actively to reverse this trend. Whereas a hodgepodge of cultural and technical realities constrained the way most other programming languages looked and felt, both Smalltalk the language and the system were written from the ground up to be so easy that a child could use them (hence the name). It was much more ambitious than just that, however. Kay saw Xerox PARC as being on the vanguard of a real revolution in human/computer interaction. In “The Early History of Smalltalk,” Alan Kay writes of this “Xerox PARC” vision of personal computing:
… the user interface would have to become a learning environment along the lines of Montessori and Bruner; and [the] needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic desired behavior.
… we were actually trying for a qualitative paradigm shift in belief structures — a new Kuhnian paradigm in the same spirit as the invention of the printing press…
Software is developed mainly by engineers, not by designers. This makes the designer constrained by the engineers’ thoughts and ideas, not by his/her own. Programming gives the designer more control over his/her tools, and therefore over the design process. It allows one to follow one’s own workflow and think beyond the resources included in the software.
Probably you don’t need to know how to program to be a better designer. But it might help. And it won’t hurt, for sure.
“And it won’t hurt, for sure.”
I’m not so sure. And I think I know why these kinds of arguments back and forth about design and programming ring a bit of a bell and seem honestly somewhat tired.
I design mostly for the web, and I know its languages reasonably well. But I’m afraid that I will always find my own eye for design and taste informed by things like “build quality” and my design ideas strengthened and stunted by my knowledge of the possibilities of the medium. There’s a tradeoff: I know intimately what can be done and so am more likely to push the medium based on my own understanding of its limitations. I also always have the feeling that my creative instincts may be cut short by that same understanding.
The back and forth sounds old because it reminds me of questions like:
Do musicians need to be able to read music to be great musicians?
What is the role of technique in the work of a great painter?
When composing dance, do we need a set of already-established movements, or should we create new vocabularies?
Should we first learn and master the rules so that we may later break them? Or should we commence with open minds and experimental hearts, discovering and rediscovering as we go?
Look, “The Newsroom” was never going to be my favorite series, but I didn’t expect it to make my head blow off, all over again, after all these years of peaceful hate-watching. Don’s right, of course: a public debate about an alleged rape would be a nightmare. Anonymous accusations are risky and sometimes women lie about rape (Hell, people lie about everything). But on a show dedicated to fantasy journalism, Sorkin’s stand-in doesn’t lobby for more incisive coverage of sexual violence or for a responsible way to tell graphic stories without getting off on the horrible details or for innovative investigations that could pressure a corrupt, ass-covering system to do better. Instead, he argues that the idealistic thing to do is not to believe her story.
I had to stop watching “The Newsroom” at the beginning of the second season, which is rare for me. I’m glad she hung on long enough to deliver this gem. ↩
Input, from David Jonathan Ross at Font Bureau, also deserves a shout-out. I haven’t played with it much yet, but it breaks with convention and offers up a font intended for code that isn’t mono-spaced (fixed-width). It also comes in a variety of widths and weights, and has a serif companion style.
Perhaps most impressive of all, it is free for personal coding use, and custom builds can be made to suit your preferred styles of characters like zeros and curly braces.
Input came out of a conversation I had with some of my colleagues on this very topic. My boss, David Berlow, asked: “Are monospaced fonts really the only solution for presenting computer code in a world with so much type technology?” Input was my response.
Mentioned in the previous post, but well worth its own link.
All it takes is a change in the perception of what an acceptable environment looks like. So, fellow shapes, remember it’s not about triangles vs squares, it’s about deciding what we want the world to look like, and settling for no less.
Jim Bumgardner has a nice little tutorial on writing code to cycle through colors. It’s a good read to be sure, but I also wanted to link to it because of how I got to it.
First I was revisiting Frank Chimero’s transcript of his talk What Screens Want1, while simultaneously thinking about changes I might make to a site I’m working on right now.
In the talk, Frank talks about how designing for screens can be like designing for what he calls “flux”:
So what does all of this mean? I think the grain of screens has been there since the beginning. It’s not tied to an aesthetic. Screens don’t care what the horses look like. They just want them to move. They want the horses to change.
Designing for screens is managing that change. To put a finer head on it, the grain of screens is something I call flux—
Flux is the capacity for change.
Yes, this could be animation, because that’s what I’ve been talking about up until now, but I think it’s a lot more, too. Flux is a generous definition. It encompasses many of the things we take for granted in the digital realm: structural changes, like customization, responsiveness, and variability.
The site, which serves as the transcript for the talk, acts as a showcase for this very idea in a rather subtle way: certain portions of the text have a muted background color which, if you pay close attention to it, is changing slowly over time. I didn’t notice this the first time I read through the essay, which I think adds another level of interest to the effect.
This time though, I decided to peek under the hood to get a better idea of what exactly was going on, when I came across a file named fade.js, which begins:
// This JS code comes from Charlie Loyd [ @vruba ], who wrote a wonderful little
// missive here: http://basecase.org/2012/11/diadem/
// He’s given me permission to pilfer his codes. Thank you, Charlie.
// BEGIN COLOR-SCHNAZZIE-IFIER.
// Hi. This is not really production-quality code, just something I
// patched together for an experimental essay. But if you have
// questions, I’d be happy to answer them.
Further down, another interesting comment:
// This is an adaptation of the rainbow function described at
// http://basecase.org/env/on-rainbows (K is for @skimbrel, my
// buddy who had the central insight that sines work for this).
// We lighten and desaturate it a little.
In his post On rainbows, linked to in the second comment, Loyd provides further interesting context:
(Halfway through writing this I double-checked that the techniques it shows were actually new – and, trying different search terms, found that Jim Bumgardner, a Processing expert, had already explained them perfectly well. With his encouragement I’m publishing this anyway.)
And that’s what led me to the piece I decided to link to above.
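For the curious, the central insight (that sines work for this) is easy to sketch. This isn’t Loyd’s or Frank’s actual code, just a minimal illustration of the idea: three sine waves offset in phase by 2π/3 keep the red, green, and blue channels rising and falling out of step, and each wave gets mapped from its natural [−1, 1] range into roughly [0, 255].

```javascript
// A minimal sketch of the sine-based rainbow idea (not the fade.js
// code itself). Three sine waves, each offset by a third of a cycle,
// drive the red, green, and blue channels; multiplying by 127 and
// adding 128 maps sin's [-1, 1] output into [1, 255].
function rainbow(t) {
  const freq = 0.3; // an assumed cycling speed; smaller means slower
  const r = Math.round(Math.sin(freq * t) * 127 + 128);
  const g = Math.round(Math.sin(freq * t + (2 * Math.PI) / 3) * 127 + 128);
  const b = Math.round(Math.sin(freq * t + (4 * Math.PI) / 3) * 127 + 128);
  return `rgb(${r}, ${g}, ${b})`;
}

// In a page, you might drive a background with something like:
// setInterval(() => {
//   element.style.backgroundColor = rainbow(Date.now() / 1000);
// }, 50);
```

To get the lightened, desaturated look the fade.js comment mentions, you would shrink the amplitude and raise the offset (say, `* 40 + 200`), so the channels hover in a narrow pastel band instead of swinging across the full range.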
I absolutely love reading stuff like this. It’s nerdy in all the ways I like, touching on color theory and basic programming, and illustrated with visual examples of the output.2 And when I came across Bumgardner’s post, I got all excited about this story of how I came across it, and how the web I love is still out there: the one with all the thoughtful independent moving parts, the one where people are free to write what they want and make it available to anyone, where people leave breadcrumbs back to where their ideas come from.
But along the way, I got to thinking about what was kicking around my head: that this was evidence of “the web I love” — as I was calling it in my head. And I don’t know why, but that thought, as it accumulated, invited another, darker thought, one about comfort and familiarity and safety. And it occurred to me that this idea of mine, of “the web I love,” had more to do with comfort and familiarity and safety than I might have let on to myself at first. What did it mean that these writers were all white and male like me? I don’t want to presume anything about anyone, and I certainly don’t want to claim that these thoughts haven’t occurred to any of them, but it strikes me that we all fit a certain kind of profile, with earlier access to more of the tools of computing and the internet, and more leisure time at an earlier age which we could use to pursue our interests, and a physiological makeup that was always culturally considered compatible with obsessive study and possibly some quirky asocial tendencies.
I have more thinking to do on all of this, which I’ll leave for another day. But my trail of surfing ultimately led me to the Parable of the Polygons3, which I had never seen before and is simply awesome. Which got me relatively safely back to my good feelings about the web that I love. Give it a read-through, and I think you’ll see what I mean.
I’m really glad that this page is still available online. Andy Ihnatko:
Trust me, I know of which I speak. “Why haven’t you finished moving the redesigned site to its new servers” you ask? Because Voltaire died about a thousand years before he could tell me the above bit of wisdom and as a result, I had to pick it up on the streets.
Practical Example: Website design, specifically, taking a largish personal site and eliminating the unwieldy and unworkable design you hate, transmogrifying and updating it, and putting it on a new server.
How about I create the pages in BBEdit, like I did years ago when I began Andy Ihnatko’s Colossal Waste Of Bandwidth? Great idea. But hey, now there are great visual content-design tools out there. Why not re-do it all in PageMill? Done.
But boy, now that GoLive is out, I think I’d be better off with a package that offers full-blown site management features. Well, I redid it once…what’s the harm in re-doing it again? I’ll be actually saving time in the long run.
Oh, yeah, Dreamweaver.
As best I can tell, this dates back to at least 2000.
And here I sit, having just jumped out of my browser to edit this entry in BBEdit.
Great new talk from Dan Meyer looking at what we can learn from video games. Lessons learned:
Video games get to the point.
The real world is overrated.
Video games have an open middle.
The middle grows more challenging and more interesting at the same time.
Instruction is visual, embedded in practice, and only as needed.
Video games lower the cost of failure.
As usual, Dan takes something that could go in a really simplistic direction and breathes a lot of life into it.
I’d also say it’s worth taking a look at this presentation as a model for lecturing and presenting ideas persuasively. He brings in his in-laws in a great way as testers of the games he investigates, using video and humor to great effect. He also has quick ways to involve the audience by having them make guesses at big numbers before he shows them on the slides, or tackle a quick mental math problem and share their strategy with a neighbor.
This past year, I have been simply blown away by the massive amount of quality work that Audrey Watters has published in the neighborhood of “ed-tech”.
If you don’t pay much attention to ed-tech, I think I can roughly sum it up as a growing trend toward integrating more and more technology into the classroom. Sometimes it’s getting iPads into schools; sometimes launching new Facebook-like software platforms to track students and their progress; sometimes teachers sharing strategies with each other for better uses of video in their classrooms; sometimes working to incorporate more code into the curriculum; sometimes building more standards across schools in America; sometimes “flipping” classrooms so that instruction happens in videos and teachers run practice workshops instead of introducing new content; sometimes transmitting lectures from big-name colleges and universities to far-flung corners of the country and globe; sometimes opening up internet access in parts of the world that haven’t had it; and so on. As you can imagine, it is a wide-ranging and complex area, touching on all of our complicated values around educating the next generation(s) as well as ourselves, and forcing hard questions about the role technology can or should play in our own lives and work.
There are a lot of publications on the web dedicated to these concerns, not the least of which may be The Chronicle of Higher Education, which I expect many of you reading this will be familiar with. But I think Audrey Watters is the kind of person we need looking at education and technology critically, and she has done that fiercely and personally all year long, giving talks and writing at her site Hack Education.
With that, I want to endorse her picks for the end of the year:
Despite the mythology of “disruptive innovation,” the most innovative initiatives in education technology aren’t coming from startups. They aren’t incubated in Silicon Valley. They don’t emerge from the tech industry. In fact, many of the ed-tech startup ideas that are developed there are at best laughable, at worst horrifying.
The trend that she identifies each of these as more or less a part of is usually called The Indie Web, and it’s something that’s been kicking around for a bit and is near and dear to my own heart.
At Midnight Breakfast, we are trying to build something independent and as universally accessible as possible – both qualities that I think speak to the good of the web. We are also trying to make something beautiful, with great writing and great illustrations and photography, something that you can enjoy reading in the evening or on the weekend, or when you have some spare time at work.
We’ve had 20 years now with the web as an ever more present part of our lives, 20 years to see how the web affects publishing. Existing print newspapers and magazines have had to find ways to adapt, first by unbundling their archives and making them searchable and shareable on the web, and now by trying to find ways to keep their best work well-funded in a less-than-ideal scenario, with ever-diminishing ad-sale rates and drives for increased page views leading to “listicle”-style consumer-friendly content and eminently shareable “linkbait” headlines. I don’t even want to get into what’s been going on with books and e-readers and Amazon and bookstores and the iPad and Hachette, and whatever else. Suffice it to say that it’s been interesting.
But there’s also always been another strand in the growth of the web, an independent spirit fighting to build new things and bring them to new audiences, giving voice to people who might not have otherwise had a platform. My favorite early experiments were sites like Fray.com and Suck.com, and later The Morning News, still later The Bygone Bureau. And there were the independent personal sites and blogs, like Jason Kottke’s kottke.org and later Mandy Brown’s A Working Library. This part of the web feels to me, to this very day, like the “real” web – and I know how snobby or immature that must sound. But that part of the web is important not for its technology, but for its humanity. It’s the web that does like to share, but likes to have control over how things are shared, for things to be shared with individuals and real human beings, not centralized, owned, and sold to the highest bidder by 20-somethings in Silicon Valley. A lot of smart people have worked to build that part of the web, and I honestly still think it is strong, even if there are changes still needed. That spirit is important.
And I think it’s that spirit that Audrey Watters thinks is needed to have great voice in education, and I think it’s the spirit that Midnight Breakfast and so many other publications are trying to bring to the literary community.
I’ll let her take it from here:
I repeat this often: one of the most important initiatives in education technology is the University of Mary Washington’s Domain of One’s Own. The Domain of One’s Own gives students and faculty their own Web domain – not simply a bit of Web space at the university’s dot-edu, but their own domain. UMW facilitates the purchase of the domain; it helps with installation of WordPress (and other open source software); it offers support – technical and instructional; it hosts the site until graduation. And then, contrary to what happens with the LMS, the domain and its content is the student’s to take.
Maybe it sounds small, and maybe everyone is tired of hearing from people in the tech industry like me who seem to have opinions on how everyone else can do things better, but if I’m going to leave our readers with any seed for the new year, it’s this: Get yourself a domain name of your very own and tinker with it a bit just so you can have your own public space that you control that isn’t owned by someone else. Maybe it won’t take, and maybe hardly anyone will ever look there but you. But sometimes we need our own spaces.
For the past three years, I’ve used GitHub for hosting code projects, discovering bleeding edge tech, and collaborating with an engineering team. And it has been simply wonderful. In fact, it’s hard to imagine coding without GitHub. I rely on GitHub every single day.
However, despite how crucial GitHub is to the developer toolbox, I’m constantly wondering why the platform is limited to just code. It’s not a stretch to imagine the usefulness of a similar platform for non-developers - authors, teachers, students - though as much as I search, I can’t seem to find one. So I’m building it myself. I’m building a GitHub for everyone else.
I’ve been watching prose.io for a while now, and if you’re familiar with that project, Penflip will look familiar. I haven’t gotten to try it out yet, but I find the collaborative features very intriguing, reminiscent of Editorially.
We still need better tools; I’m glad people are still working to make them.
I’m posting this purely for the fact that it’s a fascinating website1, designed by Andrew McCarthy, who apparently also just helped finish up the redesign for the Django website.
There is something to this approach to design, perhaps an exploration of the medium itself, that is fascinating to see still going on even as the web has entered its third decade.
I may try to drum up some more examples later, but I’m starting to get excited that some new aesthetics are finally emerging, harkening back to some of my favorite elements of web design from the 90s. Maybe connected to the whole “Indie Web”2 thing?
At any rate, I’m hoping that the web finds ways to differentiate itself from the current aesthetics of iOS and Google’s efforts with Material Design. Don’t get me wrong—those aesthetics are both perfectly mature, well-adapted to their platforms and devices, and neutral enough that they adapt to a reasonably wide variety of applications.
But where platforms supporting graphical user interfaces (like Windows and Mac OS) prefer and benefit from a certain amount of uniformity3, the web has always struck me as a home for greater levels of expression and experimentation. I have never heard of a “personal app”—but countless people around the world have a personal website.
I have no idea about the magazine, and I know of Marfa itself only by reputation. ↩
There’s a conservative/liberal sort of fork in UI design, in the sense of traditional/non-traditional. The conservatives see non-standard custom UI elements as wrong. Liberals see an app built using nothing other than standard system UI elements as boring, old-fashioned, stodgy.
At this point, nearly 4 years later, it feels more so to me that there is simply a new uniformity in mobile apps, as the experimentation stage died down. It’s that experimentation stage, though, that I feel is coming back to the web, for some reason. ↩
Maciej Cegłowski, worth a reread if you’ve already had the pleasure:
Recently, Quora raised $80 million in new funding at a $900 million valuation. Their stated reason for taking the money was to postpone having to think about revenue.
Quora walked in to an investor meeting, stated these facts as plainly as I have, and walked out with a check for eighty million dollars.
That’s the power of investor storytime.
Let me be clear: I don’t begrudge Quora this money. Anything that removes dollars from the pockets of venture capitalists is something I favor.
But investor storytime is a cancer on our industry.
Because to make it work, to keep the edifice of promises from tumbling down, companies have to constantly find ways to make advertising more invasive and ubiquitous.
Investor storytime only works if you can argue that advertising in the future is going to be effective and lucrative in ways it just isn’t today. If the investors stop believing this, the money will dry up.
And that’s the motor destroying our online privacy. Investor storytime is why you’ll see facial detection at store shelves and checkout counters. Investor storytime is why garbage cans in London are talking to your cell phone, to find out who you are. (You’d think that a smartphone would have more self-respect than to talk to a random garbage can, but you’re wrong).
We’re addicted to ‘big data’ not because it’s effective now, but because we need it to tell better stories.
I especially enjoy Parts One and Two, but unfortunately from there it feels a little biased toward fairly violent games, with no real discussion of the shift. It feels pretty conspicuous. I mean, what year is this?
Scroll-based interaction is incredibly popular for interactive storytelling. There are many compelling reasons for this, yet scrolling is surprisingly nuanced and easy to break. So here are five rules for employing scrolling effectively.
I am personally a bit skeptical about these kinds of features. They generally strike me as more style than substance. These are definitely excellent technical notes on avoiding common pitfalls, with so-called “scroll-jacking,”1 but I haven’t yet seen one of these where I felt like the medium was truly appropriate to the story.
Yet for all the excitement, I can’t help but wish for more thoughtful discussion, both conceptually and practically. Often, I hear people refer to these designs as “intuitive” and “immersive,” but I find those words maddeningly vague. We — designers, developers, readers, writers, publishers — think we know what they mean in the abstract, but when we stoop down to the details, we end up disagreeing with each other on what the problems are and how they can be solved.
And without a common language for describing what works and what doesn’t, our work isn’t being pushed or explored further. I see example after example appearing online that people have clearly put time and thought into making, which cover the same ground and share the same mistakes.
Experimentation is great if you’re learning. If you’re not, it’s just expensive.
Bostock is extremely skilled, and his work is invaluable. I also know that we need the kind of experimentation that is going on at the New York Times.2 It must still be early days; we have much to learn.3
I especially agree with Bostock in his second point, that it is preferable to attach events and animations to the browser’s standard scrolling behavior, rather than hijack it completely, as in the case of Apple’s Mac Pro brochure. Visually quite pleasing, but the interaction feels jarring. ↩
For what it’s worth, both Mike Bostock and Allen Tan are designers at the Times. ↩
I’ve still been having way too much fun playing around with my self-editing data URL “page”, and wound up finding the original proposal for data URLs from the IETF dated August 1998:
Some applications that use URLs also have a need to embed (small) media type data directly inline. This document defines a new URL scheme that would work like ‘immediate addressing’. The URLs are of the form:
The <mediatype> is an Internet media type specification (with optional parameters.) The appearance of “;base64” means that the data is encoded as base64. Without “;base64”, the data (as a sequence of octets) is represented using ASCII encoding for octets inside the range of safe URL characters and using the standard %xx hex encoding of URLs for octets outside that range. If <mediatype> is omitted, it defaults to text/plain;charset=US-ASCII. As a shorthand, “text/plain” can be omitted but the charset parameter supplied.
And apparently the idea goes back even further:
This idea was originally proposed August 1995. Some versions of the data URL scheme have been used in the definition of VRML, and a version has appeared as part of a proposal for embedded data in HTML. Various changes have been made, based on requests, to elide the media type, pack the indication of the base64 encoding more tightly, and eliminate “quoted printable” as an encoding since it would not easily yield valid URLs without additional %xx encoding, which itself is sufficient. The “data” URL scheme is in use in VRML, new applications of HTML, and various commercial products. It is being used for object parameters in Java and ActiveX applications.
I kind of love looking at older documents like this about the inner workings of the web. I primarily see them used for inlining images, as outlined by Chris Coyier, but the idea of encoding an entire page is just somehow still tripping me out.
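For anyone who wants to play along, here is a minimal sketch (in Python, with an invented example page) of the two encodings the RFC describes: plain percent-encoded data with the default text/plain mediatype, and “;base64” for arbitrary content like a whole HTML page.

```python
import base64
from urllib.parse import quote

# Percent-encoded form; omitting the mediatype defaults to
# text/plain;charset=US-ASCII, per RFC 2397.
plain = "data:," + quote("Hello, world!")

# Base64 form: an entire (tiny, made-up) HTML page inlined as a URL.
page = "<h1>Hello from a data URL</h1>"
inline_page = "data:text/html;base64," + base64.b64encode(
    page.encode("ascii")
).decode("ascii")

print(plain)
print(inline_page)
```

Pasting the second string into a browser’s address bar should render the page with no server involved, which is exactly the part that trips me out.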
I stumbled upon this today, as a group that Duck Duck Go1 had given money to. Much of this is either over my head or simply too much work, but these are clearly people who are taking the idea of owning their own space on the web very seriously.2
If you’ve got a little bit of facility with the console in your browser, I recommend taking the 10-15 minutes just to try out An unhosted editor, in which you’ll sort of bootstrap your own in-browser code editor. It’s a little bit trippy, to be honest.
However intriguing and incredible all this is, I think the site and the ongoing project serve as pretty clear reminders that there is a tradeoff between ownership over something and the amount of work that is required. Their idea is to interact with Facebook and Twitter over their APIs through a “puppet” account, which again is fascinating. But it is clearly an amount of work that most people can’t be expected to undertake, even if they knew how.
That’s, I think, the potentially ugly opposing force. Many of these open source projects and indie web types of things are actually much less accessible to the majority of people, and so can be problematic in a different sort of way than the centralized and controlled platforms. I tend to think that it’s the kind of work that can eventually be made more accessible over time, as everything needs to start somewhere. But when our big changes finally happen from small seeds, we end up with a new elite in charge, made up entirely of people who had the resources, leisure time, and privilege to create the new world that we’re all living in.4
Duck Duck Go is a search engine that I tried off and on for the past few years, which I now use full time, especially thanks to their recent integration in Safari and iOS. Highly recommended if you’ve never looked into it. ↩
“Today, ladies and gentlemen, you’ll notice that there’s an objective and a Do Now in front of you, but I need to say this: if anyone wants to talk about what happened last night, whether now or one-on-one, I’m available to do this.” They didn’t seem to understand. “After last night’s lack of indictment of Darren Wilson and the murder of Michael Brown, maybe you have something to say or get off your chest, and if so, I’ve dedicated this time right now for you to say your piece.”
With that, I opened the floor with little moderation from me. Students asked what happened, and I presented what facts I knew. Students felt angered and hurt by the typical timeline of things: cop shoots child of color and the cop gets away with it. One student asked for my personal opinion, and I couldn’t help but tell them as carefully as I could how outraged I was, and why it matters that I would do this.
They’re so used to their lives not mattering in the eyes of authorities.
If you went to art school or consider yourself a UI designer already, you will likely find this guide some combination of a.) boring, b.) wrong, and c.) irritating. That’s fine. All your criticisms are right. Close the tab, move along.
I immediately went from reading with a grain of salt to disarming and digging in. Well done.
The methods for laying text over an image strike me as especially useful.