I’m fascinated by the web. It’s so big and vast that it’s hard to pin down much about it. To me, web books are an expression of what the web can be when it’s at its best: access to ideas.
Here are just a few of the web books that I enjoy and share:
Keith covers a bit of the underlying history of the web itself, and makes one of the best cases I know of for taking a progressive approach to building for the web.
Butterick’s overview is probably the most comprehensive approach to typography available on the web. I find myself recommending Typography in Ten Minutes to anyone interested in improving their typographic skills, and often share Butterick’s advice on Résumés.
Trương’s treatment focuses more on typography as it is employed on the web. His section on Practicing Typography has some great examples of web-based typesetting.
For anyone out there who might embark upon the publication of a web book, some notes on best practices:
Provide friendly links to content. Typically, these books allow linking to each chapter separately. Many also offer a way to link to a particular paragraph.
Jeremy Keith does it best by making highlighted selections linkable, e.g. “This is not a handbook. It’s more like a history book.” I believe this could be further improved for readability by using a + for spaces, as opposed to %20.
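A minimal sketch of that encoding choice (the helper name is mine, not from any of these sites): percent-encode the selected text, then swap “%20” for “+”. The one caveat is that whatever script reads the fragment back must decode “+” to a space again.

```javascript
// Hypothetical helper: turn a highlighted selection into a readable
// URL fragment, using "+" for spaces instead of "%20".
function readableFragment(selection) {
  // encodeURIComponent escapes spaces as "%20"...
  const encoded = encodeURIComponent(selection);
  // ...so swap them for "+" to keep the link legible.
  return encoded.replace(/%20/g, "+");
}

readableFragment("This is not a handbook.");
// → "This+is+not+a+handbook."
```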
Allow readers to keep reading by providing a prominent link to the next chapter. Almost all of the examples above do this well.
Butterick’s approach in Practical Typography provides less prominent links to the previous and next chapters, in a manner reminiscent of most online documentation, such as for Django. The link to the next chapter should be easier to spot and jump to.
Keith includes a prominent link to the next chapter, along with a full table of contents following it. This is a useful approach, which works particularly well on smaller screen sizes. It could be improved by providing the reader with an indication of which chapter they have just finished in the full table of contents, as well as greater differentiation between the prominent next chapter and the full contents. Chimero does the best job of the latter in The Shape of Design, but could still improve by giving an indication of which chapter is currently being viewed.
Incorporate interactive examples and illustrations. This can include manipulable diagrams or code playgrounds.
Of the above examples, only Haverbeke, in Eloquent JavaScript, executes this successfully. His topic has an advantage, as web browsers natively execute code written in JavaScript. The interface could be improved with greater discoverability. As it stands, a reader must click on the code to discover that it can be edited, and further actions are hidden behind keyboard shortcuts listed only in a hamburger menu. A straightforward toolbar would go a long way, and could be done tastefully without disrupting the flow of reading.
Most writing is published inertly, and invites fairly passive consumption—not as passive as television perhaps, but less active than getting one’s hands dirty. This is where the medium of web books has a major advantage that I’d like to see explored more deeply.
At the moment, these remain painfully difficult to build, so it is no wonder there are so few good examples of this on the web. Even Bret Victor’s essay on Learnable Programming misses the opportunity to use interactive examples, instead opting for short videos of interaction. This keeps the reader in a similarly passive mode and fails to engage them as actively as it could.
Vi Hart and Nicky Case offer more active examples in their beautifully prepared Parable of the Polygons, which functions more as an essay than as a book.
Pick a random adjective from the dictionary, and write a sketch that tries to convey that word by changing the position, size, and rotation of a single rectangle. Do this for a couple of words, and ask friends to guess the word-image combinations.
There is an opportunity here to provide an interactive space right on the page, even if rudimentary.
Christopher Browne offers a nice history, and also some good thoughts on where spreadsheets can be improved.
It’s fascinating to find out that there were alternatives to the familiar system of capital letters for columns and numerals for rows, e.g. “A1” and “C2”. The primary alternative, used by Microsoft MultiPlan and others, numbered both rows and columns, with “R” and “C” marking the distinction, e.g. “R1C1” and “R2C3”.
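The translation between the two notations is mechanical; here’s a rough sketch (the function name is mine). The only subtlety is that column letters form a bijective base-26 number: A=1 … Z=26, AA=27, and so on.

```javascript
// Sketch: convert an "A1"-style cell reference to "R1C1" notation.
function a1ToR1C1(ref) {
  const match = /^([A-Z]+)(\d+)$/.exec(ref);
  if (!match) throw new Error("not an A1-style reference: " + ref);
  const [, letters, row] = match;
  // Column letters are a bijective base-26 number: A=1 ... Z=26, AA=27 ...
  let col = 0;
  for (const ch of letters) {
    col = col * 26 + (ch.charCodeAt(0) - 64);
  }
  return `R${row}C${col}`;
}

a1ToR1C1("A1"); // → "R1C1"
a1ToR1C1("C2"); // → "R2C3"
```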
I largely agree with his points about adding increased aesthetic formatting capabilities to spreadsheet software, but I disagree to the extent that visual distinctions like cell shading, proper alignment of numerals, distinguishing headers, etc. add meaning and offer non-trivial improvements. My concern is that this is only the case when they are used for that purpose, and spreadsheet software does little to make this easy for users.
It’s also clear that spreadsheets are frequently used as quasi-databases, which exposes much of their fragility. I have yet to see a convincing proposal for making relational databases with all of their benefits more accessible to users in the way that spreadsheets are. How can we introduce the integrity of these types of systems in ways that help users take advantage of them, without introducing a steep learning curve?
I think of spreadsheet software as the most accessible modern programming environment. Projects like Jupyter offer a very different take on a programmable document, but I long for something that improves on the power of spreadsheet software to give people a way to interact with information and make sense of it.
Amy Papaelias reflects on a Teaching Type panel she participated in a couple of months back. Her key takeaways:
Surprise! There are many ways to teach typography.
Reading matters. Well, maybe sometimes. Not always. But it should. Most of the time.
Screen typography is just typography.
“Good” typography is a loaded term.
Typography education doesn’t end with one class.
Lots of great discussion in here, which resonates with me. Personally, I place typography alongside persuasive speaking and computer programming: contemporary subjects that are not particularly well served right now, and that should be more broadly infused into everything we teach and learn. Papaelias writes:
Typography is embedded in every design class where language is represented in visual form. We teach typography all the time: when we teach web design, or senior thesis, or branding, or design history, or interaction design or even introductory classes taken prior to the actual Typography course.
I just think that scoping this to design education is way too small. There’s a ton in the design world that is inextricably bound up with capitalism and elitism. The part of good typography that has to do with communicating better belongs to everyone: not just companies that can afford to invest in improving their branding and footprint, and not just publishers who control access to information and compete with each other on the quality of their experiences over the quality of their content.
How do we broaden access to the basic skills of typography that have to do with improving one’s ability to communicate?
Statistics is quickly becoming the most important and multi-disciplinary field of mathematics. According to the American Statistical Association, “statistician” is one of the top ten fastest-growing occupations and statistics is one of the fastest-growing bachelor degrees. Statistical literacy is essential to our data driven society. Yet, for all the increased importance and demand for statistical competence, the pedagogical approaches in statistics have barely changed. Using Mike Bostock’s data visualization software, D3.js, Seeing Theory visualizes the fundamental concepts covered in an introductory college statistics or Advanced Placement statistics class. Students are encouraged to use Seeing Theory as an additional resource to their textbook, professor and peers.
[This] research tells us what ought to have always been clear: that faculty, and the ability for faculty to form meaningful relationships with students, are the most important part of a satisfying education. Check it out.
Of course, I am biased and like reading things which confirm my bias. Nevertheless, this type of thing really should be obvious. It’s gratifying to have some well-researched data to back up what we already know.
Some solid tips from FutureLearn on taking notes. A couple stand out especially to me:
Don’t just transcribe
Whether you’re sat watching a video, or in a lecture hall, it’s easy to just frantically try and scribble down everything the speaker is saying. The result is usually smudged, nonsensical notes and a sore hand. You end up focusing on transcribing instead of learning. Try and filter what the speaker is saying, listen for key points, or jot down things to research further.
I’d add that it’s quite easy to disengage and go into a passive mode if you try to get things down word for word. Passing what you’re hearing and seeing through your own filter nudges your mind to think through what it’s absorbing for itself.
This advice at the end is great as well:
Play to your strengths – prefer learning by listening? Record yourself reading your notes. Prefer visual learning? Get artistic and draw out concepts and illustrations. There’s no one way to take good notes, it’s about what suits you.
I’m also skeptical that the 21st century has made very many skills obsolete. Sure, calculators can multiply for us. But a fluency with multiplication and familiarity with its structure builds essential knowledge that students need to engage in more challenging problem solving. It’s easy for those with knowledge to underestimate the extent to which that knowledge makes higher-order reasoning possible, called the “curse of knowledge” by psychologists. I’m a big believer in content. The more people know, the better they are able to reason about new situations in the future.
This post by Lisa M. Lane was a must-read for me. I could easily just quote the whole thing, but that’s what the link is for.
If I had to pick one nugget it would be this:
The fact that such interfaces prevent branching, distributed, or complex learning is considered to be a feature, not a bug. All information is “chunked” for easy understanding and assessment.
I love thinking about the interplay between the web and education, and it’s clear that things are murky.
When designing for the web, or “digital” or “apps” or whatever, we are encouraged to make things easy to use. The trouble is that most rewarding work is not “easy”—that is, it doesn’t follow a straightforward progression of steps.
In thinking about designing tools that coexist with people, then, I don’t believe it is our job to design a tool that makes people’s jobs easier. Instead, I think it’s our job to design tools which make fussy or mundane parts of the job easier, possibly to the point of going away altogether, so that the job that’s left to the person is largely concentrated on what that person does well and finds rewarding. (This can obviously vary from person to person.)
If we treat learning as the job of the student, and facilitating that learning as the job of teachers, how can we design tools for learning and facilitation that shave away the tedious bits and focus on the juicy bits?
I can always count on Audrey Watters to join words together that get at something that’s been brewing somewhere beyond my own language motors:
And I’ll say something that people might find upsetting or offensive: I’m not sure that “solid research” would necessarily impress me. I don’t actually care about “assessments” or “effectiveness.” That is, they’re not interesting to me as a scholar. My concerns about “what works” about ed-tech have little to do with whether or not there’s something we can measure or something we can bottle as an “outcome”; indeed, I fear that what we can measure often shapes our discussions of “effect.”
Arguments around “outcomes,” “assessments,” and “effectiveness” bother me because they tend to be reductive and self-serving. They’re reductive because they require us to place measuring sticks on students that don’t take into account their perspective. And they’re self-serving because anything that you choose to measure can be optimized for, providing an easy escape from the question of whether we’re measuring the right thing: “Sure we are, just look at how much {thing we are measuring} has improved!”
At the same time, I do have a bias toward practical, hands-on education. What is practical? strikes me as a tough question, but I still personally prefer it to How do we measure effectiveness?
Audrey finishes this talk with a real doozy that will likely ring in my ears for a long time:
My concern, I think – and I repeat this a lot – is that we have substituted surveillance for care. Our institutions do not care for students. They do not care for faculty. They have not rewarded those in it for their compassion, for their relationships, for their humanity.
Adding a technology layer on top of a dispassionate and exploitative institution does not solve anyone’s problems. Indeed, it creates new ones. What do we lose, for example, if we more heavily surveil students? What do we lose when we more heavily surveil faculty? The goal with technology-enhanced efforts, I fear, is compliance not compassion and not curiosity. So sure, some “quantitative metrics” might tick upward. But at what cost? And at what cost to whom?
I will share some of my workflow and style choices with you but a lot of that is just how I present, not how you should present. I’ll offer only two words of advice that I think every single presenter should take seriously.
To preface that advice, I’d like you to make a list of what you like and dislike about presentations you attend. Keep that list somewhere in view.
I’ve seen him talk about his preparation process before, but never this succinctly. His approach to using a document to outline “Narration” alongside “Images” is a great way to separate what you say from what your viewers see.
As Dan puts it, what your audience sees should illustrate your point, not restate it.
It’s no mistake that presentations prepared in this way, with a focus on the content first as text, with images prepared as illustrations, adapt well to being presented accessibly on the web.
In this article, I’ll be building an integrated todo list component from the ground up. But what you learn doesn’t have to apply just to todo lists — we’re really exploring how to make the basic creation and deletion of content inclusive.
Great walk-through and explanation. I especially like the black and white aesthetic of the example.
I’ve been working my way through this book and its concepts recently, along with some help from one of our fab instructional coaches at work.
Wiggins and McTighe outline an approach to unit planning in the book which seems to be colloquially referred to as “backward design”. Check out the first two paragraphs of the first chapter:
Teachers are designers. An essential act of our profession is the crafting of curriculum and learning experiences to meet specified purposes. We are also designers of assessments to diagnose student needs to guide our teaching and to enable us, our students, and others (parents and administrators) to determine whether we have achieved our goals.
Like people in other design professions, such as architecture, engineering, or graphic arts, designers in education must be mindful of their audiences. Professionals in these fields are strongly client-centered. The effectiveness of their designs corresponds to whether they have accomplished explicit goals for specific end-users. Clearly, students are our primary clients, given that the effectiveness of curriculum, assessment, and instructional designs is ultimately determined by their achievement of desired learnings. We can think of our designs, then, as software. Our courseware is designed to make learning more effective, just as computer software is intended to make its users more productive.
Selfishly, it’s exciting to read a book which draws an explicit connection between my field of work–design and development–and my chosen path–teaching. There’s an incredible amount of overlap, and I’m grateful to have a set of ideas about approaching this work which may bridge some of the gaps between the fields I’ve been so involved in.
Somewhat embarrassingly, I can say with full confidence that reading this book–along with some other experiences I’ve had recently–has given me the chance to confront how much further I have to go with my craft.
It was two years ago that I set aside my daily CSS wrangling and shifted into teaching full time. For many of my friends who have gone into K–12 teaching, I’ve come to see two years as a very short time to develop as a teacher. It has felt like a long journey for me so far, but I need to remember that this is still just the beginning.
Really looking forward to this series on for-profit education, credentials, and access from Tressie McMillan Cottom:
We’ll begin with a discussion of higher education expansion, credentialism, why I prefer credentialing theory to explain for-profit highered expansion, and eventually wind our way through legitimacy theory, new economy literature and finally a working bibliography.
Feel free to join in through the comments (I’ll open them back up for this occasion; don’t make me regret that) and on Twitter using #LowerEd.
I’ve come to believe that even this simple question—“who designed this?”—rests on a flawed assumption. The broad thing we call “the math curriculum” isn’t really “designed.” Rather, like all educational institutions and systems, it is shaped by a hailstorm of competing forces…
I think this isn’t a challenge just in education, but in the field of design at large. The idea that things can be well-planned but flawed in execution causes so many issues. Every design has to live in the real world, or it isn’t design. And every real world thing will include elements that have been designed for in advance, but will have features and realities thrust at it that expose where the original design didn’t account for something.
Design can’t account for everything. Do the best you can within the constraints you’re able to identify. Then make changes if it isn’t working.
The point is not that the humanities, or the liberal arts, or the deeper concepts and values of civilization, or whatever only have value because of how they support more narrowly-remunerative skills. The point is that these deeper values and these monetizable skills exist in relationships so deeply intertwined that they are permanently inextricable from one another. […] I have no doubt that we will come in time to learn again the absolute necessity of learning that goes beyond the rote skills we currently perceive to be important, that someday people will learn to again see the utter necessity of humanistic thinking. But such understanding will only come after we have allowed deluded privateers to wring every last dollar out of our educational system as they strip it of all learning that has a function other than training more efficient little capitalists.
I applied to film school out of high school and spent a large fraction of my university math education reading screenplays and writing about movies. The coffin eventually closed on those aspirations, but my interest in narrative and storytelling has permeated every aspect of my teaching, research, and current work in education technology.
I personally agree about the value of liberal arts education, but I have to wonder if there’s a role that privilege plays in this point of view. I also wonder if the division between technical “job skills” and humanistic education is a false one?
Audrey Watters defends her turf in an excellent talk on the role of criticism. A few choice bits:
Indeed, the computer is a medium of human expression, its development and its use a reflection of human culture; the computer is also a tool with a particular history, and although not circumscribed by its past, the computer is not entirely free of it either. I think we recognize history, legacy, systems in literary and social criticism; funny, folks get pretty irate when I point those out about ed-tech.
And:
It’s an odd response to my work, but a common one too, that criticism does not enable or effect change. (I suppose it does not fall into the business school model of “disruptive innovation.”) Or rather, that criticism stands as an indulgent, intellectual, purely academic pursuit—as though criticism involves theory but not action. Or if there is action, criticism implies “tearing down”; it has this negative connotation. Ed-tech entrepreneurs, to the contrary, actually “build things.”
Here’s another distinction I’ve heard: that criticism (in the form of writing an essay) is “just words” but writing software is “actually doing something.” Again, such a contrast reveals much about the role of intellectual activity that some see in “coding.”
And:
If we believe in “coding to learn” then what does it mean if we see “code” as distinct from or as absent of criticism? And here I don’t simply mean that a criticism-free code is stripped of knowledge, context, and politics; I mean that that framework in some ways conceptualizes code as the opposite of thinking deeply or thinking critically—that is, coding as (only) programmatic, mechanical, inflexible, rules-based. What are the implications of that in schools?
And finally:
Computer criticism can—and must—be about analysis and action. Critical thinking must work alongside critical pedagogical and technological practices. “Coding to learn” if you want to start there; or more simply, “learn by making.” But then too: making to reflect; making to think critically; making to engage with the world; it is from there, and only there, that we can get to making and coding to change the world.
Is it just me, or does the tech industry sometimes seem obsessed with building a “feedback culture” at the office, where everyone is encouraged to adopt a “growth mindset,” while whenever honest, well-intentioned criticism comes along we plug our fingers in our ears and sing “tra-la-la”?
The true innovation of the sharing economy—or maybe it’s the startup economy, or entrepreneurship (or maybe just…capitalism)—is in the continued refinement of the perception of value, not necessarily in offering new services and developing new products, but in making them available for cheaper, because, as it turns out, when you don’t pay anyone a salary or give them benefits because they’re all subcontractors, and you don’t actually have to invest in any of the infrastructure upon which your business model depends, either directly or by paying taxes, your costs are a lot lower than everyone else.
John Allsop reflects on A Dao of Web Design, originally published 15 years ago, and the web today:
Perhaps those advocating this position, that progressive enhancement is old fashioned and quaint, that the Web is dead or dying because native apps are better, are right. Perhaps the idea of an application is the apotheosis of the very idea of human computer integration, and the Web, in falling short, well, in being different, is an evolutionary dead end.
But I continue to believe, just as the Web is not print, though it emerged in many ways from the medium of print, it is not just another application platform. It has its own genius, which we could call as I did all those years ago, adaptability.
This is a position that I will have a very hard time letting go of, but one that I find increasingly difficult to uphold with beginners or those who “weren’t around” at the time—and even, I think, people who were but never fully bought into the idea of “progressive enhancement”. The problem is still, I think, one of articulation. We need better explanations for what the web is, why it is important, and why anyone should care.
That, or, we simply need to step aside and let the upcoming generation come to their own conclusions in this regard, perhaps with better and more robust solutions than came before.
Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi:
Acquiring the mechanical skills of programming—learning how to write instructions or expressions that the computer understands, getting to know what functions are available in the libraries, and similar activities—aren’t helping you much with real programming. To make such claims is like saying that a 10-year old who knows how to dribble can play on a professional soccer (football) team. It is also like claiming that memorizing a thousand words from the dictionary and a few rules from a grammar book teaches you a foreign language.
Programming is far more than the mechanics of language acquisition. It is about reading problem statements, extracting the important concepts. It is about figuring out what is really wanted. It is about exploring examples to strengthen your intuitive understanding of the problem. It is about organizing knowledge and it is about knowing what you don’t know yet. It is about filling those last few gaps. It is about making sure that you know how and why your code works, and that you and your readers will do so in the future. In short, it is really about solving problems systematically.
Why all these tools? I see two related reasons: emulating native, and the fact that people with a server-side background coming to JavaScript development take existing tools because they do not have the training to recognise their drawbacks. Thus, the average web site has become way overtooled, which exacts a price when it comes to speed.
That’s the problem. Remove the tools, and we’ll recover speed.
I think the fear part is genuine, and I feel it myself.
I’ll confess to having absolutely no clue as to what is going on anymore with front-end web development, despite it being my first job, and a subject I’ve taught for years. It would be bad enough if the problem was just lack of experience, but what I all-too-often observe is a sneering attitude about perfectly common-sense ideas like “progressive enhancement.” And that includes web development instructors I’ve met.
Lately I’ve been trying to just ignore the whole thing. (Which isn’t really working.)
Rather than building devices that could enhance human memory and human knowledge for each individual, education technology has focused instead on devices that standardize the delivery of curriculum, that run students through various exercises and assessments, and that provide behavioral reinforcement.
We’ve got a lot of work to do, and I always think that the first task is to start to win over hearts and minds. And for that purpose, I’m glad Audrey Watters is around.
Audrey quotes at length from Vannevar Bush’s 1945 piece As We May Think and mentions Doug Engelbart’s 1962 report Augmenting Human Intellect: A Conceptual Framework–both of which should probably be required reading for anyone looking to contribute meaningfully to the role of computers in our lives. Bush’s piece is the more accessible of the two, by far, but Engelbart’s is worth a skim at least.
2) I will not read my paper line by line in a monotone without looking at the audience. I needn’t necessarily abide by some entertainment imperative, with jokes, anecdotes or flashy slides, but I will strive to maintain a certain compassion toward my captive audience.
Man, I thought tech conferences were awful. I’m pretty grateful, at the very least, that I haven’t actually felt obligated to attend any. I can’t imagine what it would be like to have to attend academic conferences. Sounds dreadful.
Ben Orlin on questions as a non-renewable resource:
Questions were not just things to answer; they were things to think about. Things to learn from. Giving the answer too quickly cut short the thinking and undermined the learning.
Good questions, in short, are a resource.
Solving a math problem means unfolding a mystery, enjoying the pleasure of discovery. But in every geometry lesson that year, I blundered along and blurted out the secret. With a few sentences, I’d manage to ruin the puzzle, ending the feast before it began, as definitively as if I’d spat in my students’ soup.
Math is a story, and I was giving my kids spoilers.
As a woman, I like to see as many women as far up on mastheads as possible and then to do something with the information I’ve gathered.
Proud to see Midnight Breakfast on this list, alongside so many other great publications.
Kinda wish she’d mentioned Rebecca (and the others) by name, but whatever. (For that matter, I also wish that the VIDA Count folks—awesome as they are—would make their data more open and accessible.)
In the absence of community coordination, methodless enthusiasm will ensue—and caught somewhere in the Bermuda triangle of competing standards bodies, implementers, and OSS maintainers is the developer community. If we want our community-driven projects to become official, internationally recognized standards, we need to understand the impact of our governance processes as well as we understand the technical specifications for our technologies.
Even though I’ve been sitting/standing around defending the open and standards-based approach to governance over the web this week, I sometimes just take a step back and go “Ugh.”
Tim Parks goes into greater historical depth than I’ve seen elsewhere regarding the rising tides of available reading material, the attendant sense of overwhelm, and the accompanying apocalyptic commentary.
If you’ve already got too much to read, skip to the end:
How to respond, then, to this now permanent condition of overproduction? With cheerful skepticism. With gratitude for those rare occasions when we come across a book that speaks to us personally. With forgiveness for those critics and publishers who induce us to waste our time with some literary flavor of the day. Absolutely without indignation, since none of this is anyone’s particular “fault.” Above all with a sense of wonder and curiosity at the general and implacable human determination (mine included) to fill endless space with dubious mental material when life is short and there are so many other things to be done.
And yet, although medieval Europeans had figured out how to build the same kinds of complex automata that people in other places had been designing and constructing for centuries, they did not stop believing in preternatural causes. They merely added ‘mechanical’ to the list of possible explanations.
The cane soon became a source of self-consciousness. “My eyeglasses would get compliments,” she told me, “but my cane would get a funny tilt of the head from people, as if they were thinking, ‘What’s wrong with you?’ ” For months, she was despondent. One thing that helped her recovery was finding a purple cane, while browsing online, to replace her drab, hospital-issued one. “I went from walking hunched down, wanting to hide, to actually being proud of it,” she said. Sometime afterward, she was shopping at J.Crew, her favorite store, and it occurred to her that her cane would look beautiful with the brand’s Kelly-green T-shirts. That led her to begin asking J.Crew, through e-mails, blog posts, and open letters published on Facebook and Twitter, if it would sell a fashionable cane—to broaden its customer reach and to help ease the stigma attached to assistive devices.
I’m still getting used to Apple as a fashion brand. But the idea of intersecting fashion, design, and technology with accessibility (beyond the usual software-level accessibility)? Fascinating.
Comfort noise is a fake hiss that your mobile phone, your VoIP phone, your corporate digital phone system, whatever, creates to mask the silences between talkspurts. That hiss isn’t actually coming down the line, from some analogue amplifier and hundreds of kilometres of copper; it’s created independently at each end by kindly computers.
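As a toy illustration of the idea (the amplitude and shape here are arbitrary illustration values, nothing like what a real codec’s comfort-noise generator produces), “masking silence” can be as simple as substituting very quiet random samples for true digital zeros:

```javascript
// Sketch: fill a silent gap with low-level "comfort noise" samples
// instead of sending true silence. Amplitude here is a made-up value.
function comfortNoise(sampleCount, amplitude = 0.002) {
  const samples = new Float32Array(sampleCount);
  for (let i = 0; i < sampleCount; i++) {
    // Quiet uniform noise centred on zero — a faint, steady hiss.
    samples[i] = (Math.random() * 2 - 1) * amplitude;
  }
  return samples;
}
```

Real systems go further, estimating the spectrum of the actual background noise at the far end so the fake hiss matches what the listener was hearing before the silence began.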
Aditya Mukerjee digs into the problems inherent in having our digital representations of language governed by an elite group whose dues are expensive, composed of predominantly white, predominantly male, predominantly American and Western European members.
Gayatri Chakravorty Spivak has written, ‘The subaltern cannot speak’. They are structurally prohibited from having any dialogue – even an unbalanced one – with the very powers that oppress them. Access to digital tools that respect our languages is crucial to communicating in the Internet age. The power to control the written word is the ability both to amplify voices and to silence them. Anyone with this power must wield it with caution.
The makeup of the Unicode Consortium is available to the public, so there is at least some transparency there. In some ways, this strikes me as closer to how French or German are governed as languages.
I generally prefer to imagine that there isn’t really a “standard” English, allowing for a flexibility in “correctness” which I feel is valuable to me as a speaker and as a writer.
Of course, there really is a standard; it’s just harder to pin down because there’s no one organization or website devoted to it. But it’s still there, structurally just as problematic as the Unicode Consortium, and without any of the transparency. David Foster Wallace captured it in Tense Present:
I don’t know whether anybody’s told you this or not, but when you’re in a college English class you’re basically studying a foreign dialect. This dialect is called ‘Standard Written English.’ … From talking with you and reading your essays, I’ve concluded that your own primary dialect is [one of three variants of SBE common to our region]. Now, let me spell something out in my official Teacher-voice: The SBE you’re fluent in is different from SWE in all kinds of important ways. Some of these differences are grammatical — for example, double negatives are OK in Standard Black English but not in SWE, and SBE and SWE conjugate certain verbs in totally different ways. Other differences have more to do with style — for instance, Standard Written English tends to use a lot more subordinate clauses in the early parts of sentences, and it sets off most of these early subordinates with commas, and, under SWE rules, writing that doesn’t do this is “choppy.” There are tons of differences like that. How much of this stuff do you already know?
…
I’m respecting you enough here to give you what I believe is the straight truth. In this country, SWE is perceived as the dialect of education and intelligence and power and prestige, and anybody of any race, ethnicity, religion, or gender who wants to succeed in American culture has got to be able to use SWE. This is How It Is. You can be glad about it or sad about it or deeply pissed off. You can believe it’s racist and unjust and decide right here and now to spend every waking minute of your adult life arguing against it, and maybe you should, but I’ll tell you something: If you ever want those arguments to get listened to and taken seriously, you’re going to have to communicate them in SWE, because SWE is the dialect our country uses to talk to itself. African Americans who’ve become successful and important in U.S. culture know this; that’s why King’s and X’s and Jackson’s speeches are in SWE, and why Morrison’s and Angelou’s and Baldwin’s and Wideman’s and West’s books are full of totally ass-kicking SWE, and why black judges and politicians and journalists and doctors and teachers communicate professionally in SWE. Some of these people grew up in homes and communities where SWE was the native dialect, and these black people had it much easier in school, but the ones who didn’t grow up with SWE realized at some point that they had to learn it and become able to write in it, and so they did. And [INSERT NAME HERE], you’re going to learn to use it, too, because I am going to make you.