»Her Code Got Humans on the Moon – And Invented Software Itself« →

In 1965, Margaret Hamilton became responsible for the onboard flight software on the Apollo computers. It was an exciting time, and the US was depending on the work that she was doing. But sometimes the pressure kept Hamilton up at night. Once, after a late-night party, she rushed back to the computer lab to correct a piece of code she’d suddenly realized was flawed. “I was always imagining headlines in the newspapers, and they would point back to how it happened, and it would point back to me.”

Robert McMillan | Wired, 13 October 2015

»The Racism Beat« →

We can go on like this, of course, because in America the racist traumas are widespread. How about the next time a black person is stopped and patted down without cause? How do you write about that humiliation in a way that’s different from what you wrote when Forest Whitaker received the same treatment last year, and a New York City police chief before him, and thousands of other innocent black and Latino men before them? What new column shall the writer write when an unarmed black person is killed for doing nothing but frightening an armed white person? The same thing he wrote when Trayvon Martin was killed? And that’s to say nothing of when Oscar Grant was killed. Or when Ramarley Graham was killed. Or when Timothy Stansbury Jr. was killed. Or when Amadou Diallo was killed. Or when Jordan Davis was killed. Or when Ousmane Zongo was killed. Or when Jonathan Ferrell was killed. Or when Renisha McBride was killed.

Cord Jefferson | Medium, 9 June 2014

»Learning How to Learn: The Most Important Developer Skill« →

The problem is that it’s impossible to know everything about anything, so viewing learning as a race leads to burnout and disappointment.

Instead, if you see learning as a process, you’ll appreciate the small victories and insights along the way. This will drive you to constantly move forward.

Preethi Kasireddy | Free Code Camp, 26 August 2016

»Creating machines that understand language is AI’s next big challenge« →

There’s an obvious problem with applying deep learning to language: words are arbitrary symbols, and as such they are fundamentally different from imagery. Two words can be similar in meaning while containing completely different letters, for instance; and the same word can mean various things in different contexts.

Will Knight | Technology Review, 9 August 2016