That Photo

Can a single photo still change our world?

Ranjan here, and this post will be kind of intense. I'll be talking about attention spans in the context of a horrifying photo from earlier this year.

You probably encountered that photo this summer. Maybe it was shared on a social feed or maybe you saw it on the front page of the print NYT. As with most things information, I first encountered it on Twitter. I think it was people tweeting about how they would not share it, which of course made me want to find it.

And then I found it.

What happened, according to The Guardian:

According to Julia Le Duc, a reporter for La Jornada, Martínez Ramírez had arrived in Matamoros on Sunday, hoping to request asylum from US authorities with his wife, Vanessa Ávalos, and their daughter. But when he realized that it could be weeks before they were even able to start the asylum process, Martínez decided they should swim across, said Le Duc, who witnessed Ávalos give her account to the police.

“He crossed first with the little girl and he left her on the American side. Then he turned back to get his wife, but the girl went into the water after him. When he went to save her, the current took them both,” Le Duc told the Guardian.

Just look at that picture. Her arm across her father’s neck. He draped her in his T-shirt to try to carry her. Her diaper filled up with water. She's still little enough to wear a diaper, but old enough that her shoes look like they've been walked in. So perfectly two years old. From a different angle it looks like he has dark blue adidas gym shorts. I think I have the same pair. It looks like there is a beer can next to her. That little girl's arm is around her dad. Her little brown arm is almost blue-ish. And they're both dead. He went back for her, and they both drowned. Together. With her arm around his neck and him carrying her.

I'll stop now and try to make it through this piece.

TRYING TO STAY FOCUSED

My daughter was exactly 2 1/2. I stared and stared. I didn't quite cry, but it was a weird emotion. I remember wanting to not look away. To not keep scrolling and to make this count, at least for me. Somehow. I didn't know what I would do and wasn't planning on writing a letter to my congressman or something. I just wanted to not diminish what I was looking at, and to switch my attention would do so. I instantly thought about that little Syrian boy who washed up on the beach and the famous Napalm Girl image. Should I look them up? No, I'm going to try to stay here.

I remember all this vividly because there was this weird internal, self-aware push and pull. I was desperate for the country and world to collectively take in this horrifying photo and make it mean something. Before we started arguing about whether it was fake, or trying to find salacious details of the father's past, or questioning whether the family should've even been on the journey, or debating the impact of Trumpian immigration policies, we would all just stare and try to process how soul-crushingly awful the thought of a parent and child drowning together, wrapped up like that, really is.

The least I could do, my tiny part, was to not look elsewhere or think about something else. And just stare and think.

I remember it being really hard.

SCROLL OR ENGAGE

I've done a fair amount of reading on attention spans and media consumption. My attention span isn't quite that of a goldfish, yet. I can still read books and watch full movies, and I do write these long newsletters, so I'm still trying to fight the good attention fight. But just staring at a digital image is hard for me.

I'm hesitant to media-intellectualize my experience with the photo, but I think it's important. As I stared at that photo, I wanted to just keep scrolling. My thumb was fighting my heart.

The medium gave me two choices, keep scrolling or engage: Like it, share it, or comment on it (‘engage’ here is not necessarily a positive thing). Facebook's $600bn market cap depends on their best-in-class data, product and engineering talent making sure you feel those choices viscerally. Do not linger. Keep scrolling or engage. The platforms are great at it.

So I went to the NYTimes home page, and clicked on the article. This was much better. There were no Forbes.com-ian autoplay video ads popping up as I stared. Just a photo on a white canvas. It makes sense as the NYT business model incorporates time-on-page as a commercial metric. That's a good thing. But the depressing part is, my mind still had trouble staying there. I wanted to go find out "what people were saying" and google more about who this father and daughter were. I tried reflecting, yet my eyes darted to the 20 other browser tabs I had open.

I finally googled "Napalm Girl" and wondered: could a single, powerful, horrible photo still create widespread societal change? I searched for "Syrian toddler on beach." Did that photo end up changing the course of the Syrian refugee crisis (this is a decent piece arguing that it did)?

MESSAGES, MEDIUMS

I wonder (and worry) about the way we experience photography. There's a palpable difference between looking at photography in one of those roundups, like the Top 25 News Photos of 2019 from The Atlantic or The Best Photos of 2019 from National Geographic, and scrolling through Instagram. Each human-curated photo feels like it has some greater worth, but there are still so many that you spend less time with each individual one.

Margins' media-wonk readers might be familiar with the famous Marshall McLuhan idea of The Medium is the Message: the nature of a medium (the channel through which a message is transmitted) is more important than the meaning or content of the message.

I firmly buy into this. The scary part is that the dominant places where we are now introduced to photography are those social feeds where you scroll past or engage. We are trained to engage in metrics warfare rather than quiet reflection.

The NYTimes' decision to publish the Oscar and Valeria image on their print front page was a controversial one. The editorial staff wrote an entire piece explaining the choice, and one idea really stood out:

There are some places the photo hasn’t appeared: The Times has a longstanding policy of not using graphic images in social media posts, except in extremely rare circumstances.

“It’s one thing to feature graphic photos on the homescreen or in an article,” Cynthia Collins, our off-platform editor, said. “It’s quite another thing to serve a graphic image in tweets and Facebook posts that can appear in the newsfeeds of people who didn’t deliberately seek out the news and editorial judgment of The New York Times.”

It’s a smart insight. The medium matters (I also love that the NYT has an ‘off-platform editor’).

RESOLUTIONS

I can’t read a full book on the Kindle app for iPad because, as I’m reading, I inevitably switch to Safari and look stuff up related to what I’m reading. Sure, this can lead to some interesting internet rabbit holes and learnings, but it usually results in not finishing the book. If I read on a physical Kindle, I have nowhere else to go. If a passage strikes me, thanks to the device’s wonderfully shitty connectivity, I can only just stop and think.

Back in June, I told myself I would not forget that photo. I had actually made a mental note to just spend some time with it towards the end of the year, and thanks to writing this newsletter, I can make good on that promise. Maybe if I didn’t have a little girl around the same age, it wouldn’t have hit me as hard, but it’s still astonishing to me that that photo didn’t have every American completely stop in their tracks. I don’t know what would’ve been the desirable policy outcome, but I’m not even going that far.

I’m not a big New Year’s resolutions person, but if there’s one thing for 2020 I want to do more of, it’s just stop and reflect. Not share. Not comment. Just silently try to think.

And then maybe a few months later write a newsletter about it.


Note #1: A big thank you to the large number of readers who responded to my newsletter on Innovation in the 2010s. I’ll be following up on this topic in next week’s edition and including some of your examples.

In defense of biometrics

Say no to passwords.

Hi! Can here again. Today, we talk about biometrics.

One of the many joys of moving to the US is discovering the madness that is Social Security Numbers. My first experience with them was defined by my lack of one. Around 15 years ago, I was at a Cingular store in Squirrel Hill in Pittsburgh, trying to get a phone number. The first person trying to help me was new, so he didn’t know how to proceed. Luckily, another clerk showed up and, having dealt with foreign students before, did some magic, and I was then the owner of my first (and still the same) American phone number. I didn’t like my phone one bit, and it was pretty eventless.

It would only become clear to me a couple years later what that guy did when I tried to get an iPhone —the first one, mind you— at the Apple Store in Shadyside. Apple’s then-brand-new (and quite buggy) activation system on iTunes asked me for my SSN, and I blanked out. I didn’t have one. Somewhat unsurprisingly, the Apple Store guy (Genius?) asked me if I was a foreign student, I said yes, and then he entered 999-99-9999 as my SSN. And then, there I was, the proud owner of a brand new iPhone. My new internet communications device brought me many joys, and my ownership of it was pretty eventful.

I bring Social Security Numbers up because they are stupid. Of course, now that I am a real person who makes money and stuff, I have a real one, but I do not like the concept one bit. There’s nothing to love about it. They are sort-of-random, but only as of recently. You are supposed to keep yours a secret, except every other form you have to fill out as an adult requires it. You aren’t supposed to share them with anyone, but customer service agents will casually ask you for yours (my employer has solutions for this) and harass you when you refuse to repeat it on the phone. If someone asked you to design a database schema where the unique ID is the same as the password, you’d be fired in an instant, but here we are! Social Security Numbers, as they are currently used in many systems, are both usernames and passwords. It’s just stupid throughout.
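If that schema analogy feels abstract, here is a minimal sketch of the absurdity (the table and column names are mine, purely illustrative): one design where the identifier is the secret, and one where the two are kept apart.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# The SSN model: one value is both the unique ID and the proof of identity.
# Anyone who learns the identifier can impersonate you.
db.execute("CREATE TABLE people_ssn_style (ssn TEXT PRIMARY KEY, name TEXT)")

# A saner model: a public identifier for lookup, plus a separate secret
# that is verified (ideally as a salted hash) but never used as an ID.
db.execute("""
    CREATE TABLE people (
        user_id     TEXT PRIMARY KEY,  -- fine to print on every form
        name        TEXT,
        secret_hash BLOB,              -- proves identity; useless as a lookup key
        secret_salt BLOB
    )
""")
```

The SSN system, as it's used today, is functionally the first table.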

hunter2

But, OK, you know what else is stupid? Passwords. They are. I hate passwords with a passion. I hope they go to hell. Of course, I use a password manager because it’s the right thing to do, but I am also glad it’s called 1Password because that’s the number of passwords I can be bothered to have in my life. I also have a couple of other passwords, or rather passphrases really, for a couple of computers I own, but that’s about it. None of you should be remembering more than a few passphrases.

There are so many things wrong with passwords. For starters, many of you use a single password for all the services you use (somewhat ironically, the SSN problem again, just from the other side this time). “Password re-use” is the industry term here, and it’s so rampant that if you’ve been re-using a password for a while, it’s probably out in the clear in many of the hacked databases already. I know mine is. In fact, you can even read about my password on TechCrunch.

And that’s just the biggest problem; there are so many others. Another is that, at some point, everyone who stores them mishandles passwords to the point where they don’t even protect anything. Often, they are so weakly hashed that an underpowered laptop is all you need to crack them. Other times, people don’t even bother hashing them at all, so you end up with the leaks mentioned above. I wrote about these before.
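(For the curious: storing passwords properly roughly means a salted, deliberately slow hash, never anything readable or reversible. Here's a minimal sketch using only Python's standard library; the scrypt parameters are illustrative, not a recommendation.)

```python
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) from a salted, deliberately slow key-derivation function."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison, so response timing doesn't leak how close a guess was.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("*******", salt, digest)
```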

And there are human problems too. Passwords are supposed to be things only you know, but people are generally bad at remembering things they don’t use often. So you have to design your service with “forgot password” functionality in mind. These so-called account recovery flows are not just painful to implement, they are generally hard to get right, and they often become one of the first attack vectors. The easier you make it for people to recover their accounts when they lose their passwords, the more gaping holes you open in your system. My co-author Ranjan pointed out that this is, in fact, how John Podesta got hacked, and we all know how that turned out. It’s not that we are at risk of the liberal world order collapsing because of credential management, but passwords certainly didn’t help.

Look, I can sit here and tell you hundreds of other ways passwords are bad. You know what fixes them? Not having them in the first place! Yet, you obviously need something to replace them with, in order to, as we say in the industry, “authenticate” yourself. You need something that is unique to you, that you and only you have access to, that you can’t easily misplace, forget, lose, or generally be without. 

And I know you won’t like this, but replacing them with your biometrics is the end solution here. Your fingerprints, or your face, or retinal patterns; pick anything. They are all better than your passwords. This, however, is a touchy subject. Part of our goal on Margins, both for Ranjan and me, is to explore these ideas out in public and see what you all think. 

A common refrain among the engineer types against using biometrics is that you can’t “rotate” them, which is an odd way to say “change.” The notion flows from a common security practice for managing electronic keys (which are really passwords between computers, not meant to be read by humans but still kept secret). In order to shrink the window during which an exposed key leaves you open to attack, server admins rotate, or change, their keys periodically. There are arguments against this, as people often screw it up and actually end up exposing themselves, but it remains a generally accepted good practice.
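(If “rotation” is jargon to you: it just means periodically minting a new key and retiring the old one after a short grace window, so a leaked key only buys an attacker limited time. A rough, purely illustrative sketch of the idea:)

```python
import secrets, time

class RotatingKey:
    """Keep a current key, plus the previous one for a short grace window."""

    def __init__(self, rotate_every_seconds: float = 24 * 3600):
        self.rotate_every = rotate_every_seconds
        self.current = secrets.token_bytes(32)
        self.previous = None
        self.rotated_at = time.time()

    def maybe_rotate(self) -> None:
        if time.time() - self.rotated_at >= self.rotate_every:
            self.previous = self.current            # old key stays valid, briefly
            self.current = secrets.token_bytes(32)  # fresh key takes over
            self.rotated_at = time.time()

    def is_valid(self, presented: bytes) -> bool:
        self.maybe_rotate()
        return presented in (self.current, self.previous)
```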

Of course, biometrics do have this problem of being tied to your bio, and hence being impossible to rotate. Yet, I think this is not a huge problem, for a couple of reasons. First of all, there’s the philosophical argument that this is a feature, not a bug; that your body is generally (with some notable exceptions) unique and consistent across time. Of course, people age, and you can alter some of your measurements in some painful (or not?) ways, but specific biometrics stay the same.

But more importantly, the big reason to rotate keys is that it’s actually pretty easy to reproduce a computer key, which is just a piece of text. Biometrics, if done right (again, a big if), are much harder to reproduce. You can, for example, steal the digital representation of someone’s retinal scan, but currently, we do not have the technology to reproduce a retina without the pesky human thing it’s attached to. Moreover, even if you could, it would still take a lot more effort than, say, entering something into a text field or copy-pasting a computer key. People made fun of Apple when it first launched Touch ID, but remember it was replacing a system where you entered 4 digits on a flashing screen in public. The threat models make a difference.

Problem isn’t the Technology

A more serious problem with biometrics is more political than technical. As technology improves, so does biometrics technology. When I had my first computer, detecting a face in an image seemed like an insurmountable problem. By the time I was in college, not just detecting but identifying someone in a well-lit picture was a sophomore-level homework project. Today, a decade or so later, you can download a library from GitHub to do all that for free and run it on a $5 chip tied to a $10 camera.

The darker implications are pretty obvious, and we are seeing some of them unfold in real time as we speak. Just recently, China announced that you’ll have to get your face scanned just to get a cell phone number—no pesky SSN required! And that’s neither the beginning nor the end of it; biometrics do not just identify you, they can also be used to infer many characteristics, such as race and gender. Things can and do go wrong here often, and that’s assuming the technology works as advertised, which is often not the case.

So, why am I not extremely worried? I am. But I also think that we can solve some of these problems. The solutions will surely involve technology, but I think the real trick will be the political will required to deploy these technologies responsibly.

As a Turkish immigrant to the US who travels internationally often, I have given my fingerprints, facial scans, and god knows what other data to various countries’ immigration departments many times. Just a couple of weeks ago, I entered the US using Global Entry without ever removing my passport or Green Card from my pocket, just by scanning my fingerprints. Out there in the developed world, Europeans have had these automated passport gates for years. When I lived in Singapore, I just had to press my thumb to enter the country.

I know this proliferation of biometrics-based identification bothers many people, and they think it sets a bad precedent. At some level, I agree. In the current political climate in the US, some of what I say doesn’t immediately sit well with me. Yet, on the other hand, I do recognize that we are living in a society governed by rules imposed by entities that have monopolies on violence. I do not like Trump, nor Erdogan, but I remain a law-abiding citizen of Turkey and a legal permanent resident of the United States.

The point I am making is not one of boot-licking or surrendering my individual liberties. Rather, I acknowledge that we (most of our readers, at least) are living in rules-based societies. We do, can, and should build technologies that allow people to live their lives more securely and easily while keeping them safe from oppression. The idea that a password is what would keep the G-men away from me is an odd one.

In this case, for me, the idea of having my identity, my data, my being locked away behind some poorly implemented technology like passwords seems like a worse deal than using my biometrics on my phone and my laptop. I do trust, maybe misguidedly, that client-side identity verification using my face is a better option. I type this on a laptop that I unlock with my fingerprint, and come tomorrow, I’ll be reading your replies on a phone that scans my face. My bank account is protected by a password manager that I can unlock with either. This seems, to me, like the right direction.

Innovation and the 2010s

Reflecting on a decade of technological advances

Ranjan here, and today I’m getting nostalgic.

Yesterday, I spent the afternoon at a working group of media innovation people. We discussed how their companies experiment with and operationalize emerging technologies, which led to a lot of discussion about what the future could look like. Yes, sometimes my content strategy gig leads to some fun stuff.

Someone put up a slide about how their company is thinking about the world in 2030. It had the word "CONVERGENCE" in huge letters, along with pictures of people with robotic exoskeletons, talking about a tipping point where AI, neural interfaces, quantum computing, and IoT will all come together to fundamentally change how we experience the world. Masa Son would be proud.

I thought back to the beginning of the decade and something hit me. In 2019, sitting in front of me was a 2018 MacBook Air that was charging an iPhone X.

In early 2010 I had started business school at INSEAD in Singapore. And on my table:

That's a MacBook Pro charging an iPhone 3GS. The cable is different, the computer is a bit smaller, the phone is a bit clunkier, but it all looks kind of the same. And I started looking around me. Everything kind of looked exactly as it had a decade ago.

Did we experience real innovation in the 2010s? If you were to transport someone from 10 years ago to today, what would blow their mind?

TIME TRAVELING

2010 Ranjan - welcome to the end of the decade.

First thing to note - I found that picture above just by searching "computer" and "2010" on Google Photos. That's pretty magical. But is that really world-changing? This felt like what would be reasonable for 10 years of search improvement. What about today’s tech is so profound that it would be hard for 2010 me to comprehend it?

I kept thinking of this as I walked home. Everything around me kind of looked the same as it did a decade ago. The jackets people were wearing, the bags they carried, the hats they wore.

Certain things felt futuristic, like that guy zooming by on an electric skateboard. Otherwise, everything looked like NYC a decade ago. There weren't holographic projections or robots walking alongside us.

When I walked into my apartment, 2010 Ranjan had his first real "holy shit" moment. I have all the lights in my place wired to Alexa and we voice-command everything. Saying "Alexa, turn on the living room," seeing a faint blue ring on a weird black plastic tube light up, and having all the lights in my living room magically turn on is starting to feel like the future we were promised.

TAKE A LOOK BACK

I'm a big fan of new years and new decades, but it's less about making resolutions and more about looking back. I'm writing this post because I'm curious what Margins' readers think about the technological innovation of the 2010s. Has your mind been blown? Is it what you expected?

We keep hearing about the onset of exponential innovation, yet everything feels incremental at best.

I spend a lot of time reading about cutting-edge technologies, and there is plenty out there advertising how rapidly the pace of technological progress is accelerating. Wait But Why's Neuralink piece is the one that really sold me on the idea that we're a few years from cyborg-dom. But then, when I think back to what the year 2010 looked like, it's barely technologically discernible from today, at least visually.

I still use Gmail. It looks and acts pretty much the same. They introduced autocomplete, which I guess could eventually be mind-blowing, but isn't quite there. I use YouTube... and it looks almost exactly the same (another picture from 2010):

I was using Spotify, and while the algorithm has probably improved and they now have podcasts, the overall functionality feels unchanged. I'm one of those people who deleted Facebook and Instagram, but from what I see, they operate pretty much the same. I guess Stories are innovative, but we're still a long way from all spending time in weird VR communities where we walk around without legs.

And don't get me wrong, I'm hugely bullish on all things VR and AR, but that stuff is a long way from my day-to-day.

ARBITRAGE, NOT INNOVATION

I genuinely spent the past 24 hours obsessing over this question and just looking around me. When a huge Amazon box showed up with diapers and formula (#DadLife), I remembered how ecommerce has reshaped the way we live. But this is something that still feels incremental. 1 week changed to 2 days changed to 1 day. That’s still pretty cool, but in the past few weeks, I've wondered what the "true innovation" of Amazon's delivery prowess is. I, along with most, had long been sold on a vision of whirring warehouse robots, of which there are certainly many. But a lot of great recent journalism has shown that a big part of the "innovation" is just a greater appetite to work people to death.

Ride-hailing is another thing that might surprise a 2010 time traveler. After decades of raising your arm to hail a cab, you can now just tap a button on your phone and see your driver on a map. It's also reshaped the way we live. And it's also another one where I question whether the true innovation was really regulatory and labor arbitrage. Also, in Singapore you could hail a taxi via SMS back in 2010 and it would show up at your door, so perhaps this doesn’t make the cut.

This kept bringing me back to my co-host Can’s very popular post about most innovation really being business model innovation (told through the lens of a very insider-y Twitter joke construct). If the engineer-half of the Margins is saying that……

POSITIVE. NEGATIVE.

The positive outlook: Maybe the past decade’s advancements have all happened in the background. I started this meandering post by noting how insane it was that Google Photos could find my photo of a computer from 2010. Also, my iPhone looks generally the same, but the photos and video it takes now are incomparable to 2010, and that's all under-the-hood compute and hardware.

I noted that voice assistants are mind-blowing, and I still marvel at how they recognize and process what I tell them. But we can't really "see" them. So maybe the past decade was about building the machine intelligence that will unleash a flurry of visible, day-to-day technologies that will make 2030 look like Bladerunner. Maybe I can finally start bio-chipping and won't be thought of as weird (I think about this a lot).

The negative take: regular readers know how I feel about the Big Tech oligopoly. Is there an argument to be made that they choked off real innovation over the past decade? They allowed anything that would directly help their businesses, but bought and killed the things that could've changed our lives. If Instagram hadn't kneecapped Snap by copying Stories, could we all be wearing Spectacles right now, with AR having become a real thing? We’re still using the same exact apps, and they kind of look and feel the same.

Could the way we social network (as a verb) have been something more than just scrolling through a visual feed and tapping Like? Voice search is starting to change the way we search, but otherwise, seeking out internet information looks pretty much the same. Though, I guess Google Snippets could be the innovation? And LinkedIn! Damn you LinkedIn, just improve a little bit! I still believe in you!

I also acknowledge that maybe we don’t want things to look too different. Facial recognition tech is already here, but I’m glad we don’t have big, futuristic cameras stuck to every lamp post. Would I really want to look out my window and have a bunch of drones hovering nearby?

GETTING DEEP TECH

One of the things that makes writing Margins posts rewarding is that, for both Can and me, technology is a lifelong obsession. But it's not a blind love. It's a mindful one. So I'll leave our readers with the question: Do you think we lived up to our tech potential over the past decade? Will the next decade realize exponential change, especially the kind that you can see around you? And finally, will my kids one day marry robots? Because I really do wonder about this.


In thinking about the tech that still blows my mind, here are a few examples:

The Peloton Bike: Y’all keep hating on that ad, but I’m still going to say that the entire Peloton experience is pretty mind-blowing.

Sous Vide: While my love of fried chicken is very public, the former trader in me has certainly had his share of great steaks. This is a surprising one, but I will sometimes set a plastic-wrapped steak into water and start the sous vide from my phone while still at work. And to come home to a steak that, with a quick sear, looks like this, thanks to technology, is still utterly head-spinning for me.

If in the future I have cholesterol problems, the cause will probably be well-documented in a trail of newsletters.

Sidecar (from the new macOS): This is another random one, but as someone who has to move around a good deal for work, but also prefers working with a secondary monitor, the new iPad <> Mac Sidecar connection has been life-changing. And perhaps that’s the perfect way to end this post, because looking at that same laptop and iPhone, but adding in an iPad and seeing windows seamlessly navigate back and forth, with no wires… that’s pretty amazing.

Tech, Meet the Humans

Humans are good now.

Hi, Can here. Today, some gripes.

A common gripe a lot of people have with the technology industry is how dehumanizing its products are. There are some obvious culprits. All those delivery and not-really-ride-sharing apps treat the drivers and the delivery folks primarily as human adapters to a digital platform. There are black-box algorithms that help you decide (or just decide) who you are going to sleep with and who you are going to vote for. There are some humans in the loop somewhere, and occasionally they step out of the shadows when things go haywire, but more often than not, they are hidden, or at least inaccessible to us (you?) mere mortals.

I mean, there are many reasons for this. Some of it is really just a function of the scale these companies operate at. At some level, I do understand that when you are Facebook, and you deal with 2.6B people a day, it’s not going to be feasible to give everyone the white-glove treatment (aside from, ehem, the white-gloved Like button), so you generally automate as much of the pesky human interaction out of the loop as possible. In one sense, Facebook can only exist as a social network by taking the social out of the network. Nothing human, or human-related, exists at a scale of two god damn point six billion humans. Dehumanization at that scale is just human. Look, I am harping on Facebook somewhat habitually here, but you could really apply this to any company that deals with more than some hundred million users.

There are, however, other reasons too. Maybe the tech companies’ products are so dehumanizing because the companies themselves are. There’s a good amount of evidence suggesting that technology companies, especially those in the Bay Area, have had cultural problems rooted in the rather problematic, generally white-male-dominated culture of technology circles. The nerds are the kings now, and it turns out they weren’t nice people either. If I am being vague here, it’s only because it’d take me the rest of this essay just to list the toxic incidents I’ve experienced or heard about from those who experienced them firsthand.

But, you know, these things can go the other way. The main ingredient in any tech company is human capital, a rather dehumanizing way to refer to people. We talked about this a bit before, but as any founder would tell you, the bottleneck for most tech companies, small and large, is how fast they can hire people to keep the blinkenlights blinking even quicker. Small companies are generally at a disadvantage here, as most can’t keep up with hiring in a tiny region with rising living costs. But if you are a monopoly at a time when the government is asleep at the wheel, you’ll be fine. You have the money, so you just hire, hire, and hire some more.

What happens, though, when you hire so many people that, as predicted by the law of large numbers, you accumulate enough of them who fervently disagree with your management decisions? Or what happens when you end up hiring from such a small pool of people in a tiny, geographically concentrated region that those people’s politics tend to differ from those of management? I guess you can ask Google. Maybe there’s no such thing as a free Google lunch after all?

It’s been utterly fascinating to watch, for the last few months, the workers at companies like Google, Facebook, and many other places band together and make their voices heard against management. For many years, the technology industry and its generally liked poster boys (obviously, boys) carried themselves as utopian workplaces where you could simply continue all your Stanford dorm habits, including the rather unhygienic ones, and the gap between the workers and the management was rather...small. I mean, if you actually made it inside the walls, you knew that whole techno-socialist vibe was rather…fake, but nonetheless, it did make good b-roll for 60 Minutes segments and HBS case studies, and the workers were compensated well enough to not make too much noise.

Yet! Somewhere along the way, as the liberal world order began to crumble under its own weight in 2016, the lofty compensation packages of the techies lost their compensating effect. The famously secretive Facebook is leaking like a sieve. Google is canceling its storied TGIF meetings and hiring anti-union consulting firms.

Do things get direr from here, if you are on the management side, or is this the end? Or, is this even a thing that really matters when you are enjoying record profits per share quarter after quarter? As many protesters as there are outside Google’s offices, there are many thousands more who are happy to look the other way or maybe just agree with the management’s decisions.

The generally centrist, capitalist side of me suggests that these are really the growing pains of a young industry, which is now realizing that you can’t run a hundred-billion-dollar company composed of tens of thousands of people like an anything-goes madhouse. At some point, people have gotta realize that there’s work to be done at work, and, you know, if you don’t like it, there are other jobs.

But another side of me thinks that these are also the growing pains of an industry that has long neglected its impact on the people who have to deal with the externalities of its products. It is probably good that a company faces criticism when it walks out an executive with a checkered sexual harassment past to a hero’s goodbye, or when it takes money from lying politicians under the disguise of some Fifth Estate bullshit. Those things are objectively bad, and it is good that people whose personal and professional lives are intertwined with the places they work are upset at the management for letting them happen.

Sometimes people ask me why I am so cynical about the technology industry, and I tell them I really am not. I have been fascinated by computers since the first time I touched one, and I made this field my career, somewhat to the chagrin of my parents (though they’ve come around). I’ve made a good living working in this field, and continue to do so. However, at the end of the day, I recognize this is all a means to an end, which is to make my life, as well as others’ lives, better. The work I do is for humans, not computers, and I do it primarily through humans, with computers in the loop, not the other way around. Luckily, I do not need a PR-bot to tell me that what I am doing is good for the world.

Thanksgiving Week Link Dump #2 of 2 - Food Writing Edition

Ranjan's favorite articles about food

Ranjan here, curating my favorite food writing.

For the second Margins' Thanksgiving Week Link Dump, I figured I'd honor the holiday by pulling the best food-related articles I've devoured over the years (easily found thanks to my favorite reading tools: Instapaper and Readwise).

From Rendang to Peking Ravioli; from Pizza Hut helping bring down communism to Kashmiri geopolitics and saffron prices; from the economics of the lobster market to British cuisine and globalization; and of course, fried chicken. Enjoy!

The Top Picks

How an outrage over crispy chicken united South-East Asia: Contrary to the title of this BBC piece, it's not about fried chicken, but rather Rendang, the Southeast Asian slow-cooked meat. Great food writing is the kind that, every time I cook or eat the meal, conjures up a very specific mental image. Even though my current incarnations of Rendang are made in a very modern Instant Pot using a pre-packaged spice paste, I always think of these "wandering" men:

Gusti Anan, professor of history at the University of Andalas in Sumatra, explains how the Minangkabau tradition of merantau (voluntary migration) resulted in the spread of rendang to neighbouring countries in the Malay Peninsula. This tradition of wandering is a version of migration that is unique to the Minangkabau people, which research suggests is connected to its matrilineal tradition where men are considered ‘guests’ in their wives’ homes and ancestral land is passed to women not men.

Men (and also some women) chose to migrate hoping to gain life experience as well as better financial opportunities. They travelled to places like Malaysia and Singapore on foot or by river, and finding food was often a struggle. Anan said, “To solve this problem, they would bring food from their home… and food that could last a long time in good condition is rendang.” Wrapped in plantain or banana leaves, they carried it with them to sustain them on their journey.

The Story Of Peking Ravioli: I'm a sucker for stories about immigrant adaptation of cuisine. I guess it's because it represents my own assimilative upbringing and my perception of the American Dream.

I never really understood how "regional" the "Chinese Food" I ate growing up in Boston was, until I left for college in Atlanta. Items like Chinese Chicken Fingers or Peking Ravioli didn't exist, at least in name. This Lucky Peach article taught me the history of the very specific cuisine of Boston Chinese Food, and its matriarch, Joyce Chen.

Note #1: I found this in my Instapaper archive, but the original, published in Lucky Peach, has disappeared from the internet. Luckily, it was re-published on another food blog.

Note #2: I have a really weird obsession with Chinese Chicken Fingers, which might require an entire post one day. Most friends think they're just sweet and sour chicken without the sauce, but to me they represent the perfect intersection of fried chicken and the immigrant struggle.

To delve into their name, though, is to get a glimpse of Boston’s culinary past. And as with much of the Chinese food in the area—from moo shu and Peking duck to scallion pancakes and the all-you-can-eat buffet—it all starts with Joyce Chen and her seminal Mandarin restaurant, Joyce Chen Restaurant, which she opened in Cambridge in 1958.

Gorbachev's Pizza Hut Ad Is His Most Bizarre Legacy: This article came out this week and it's a beauty. I had seen the Gorbachev Pizza Hut ad circulating over the past few years, and this was an incredible geopolitical examination of what it meant for the transition from a bipolar to a unipolar world.

Hot Chili Peppers, War, and Sichuan Cuisine: A wonderful history of how the chili pepper took over the world:

“The food of the true revolutionary is the red pepper,” declared Mao. “And he who cannot endure red peppers is also unable to fight.”

Maine Is Drowning in Lobsters: A fascinating look from Justin Fox at Bloomberg on the economics of the Maine lobster market. It's not just a simple question of supply and demand:

All in all, it's a fascinating tale of adaptation, marketing and lobster logistics. There is one big catch, though, beyond the vague fears that the lobsters can't be this abundant forever. It's that the bait used to lure the lobsters into traps -- herring -- isn't as abundant as they are. Herring stocks along the Maine coast haven't collapsed as some other fisheries have, but the catch has fallen in recent years, to 77 million pounds in 2016 from 103 million in 2014 and more than 150 million some years in the 1950s and 1960s.

The finest Chinese delicacies — duck’s tongue, fish maw and chicken’s feet: I eat everything and anything, a trait that certainly helped gain a baseline acceptance among my wife’s Taiwanese family. But intestines. And duck’s blood. And other offal-y, tripe-y things that look like this, I always had a hard time with. That’s until I read this one FT piece that introduced me to the concept of Mouthfeel, courtesy of the author Fuchsia Dunlop.

It completely changed the way I approach anything chewy and tough, and now I’m the first at the table to toss the tripe into the hotpot:

Probably even more important, however, is the Chinese delight in the textures of food. In Chinese gastronomy, pleasure is derived from the total sensory experience of eating, and “mouthfeel” (kougan) is inseparable from aroma and taste. The Chinese (like the Japanese) have an exquisitely developed appreciation of texture, and enjoy a far greater variety of mouthfeels than those commonly understood in western cuisines.

The World’s Most Expensive Spice Is on the Verge of Disappearing: The first time I cooked with saffron was not in an Indian dish, but rather an attempted Paella.

I guess it doesn't really appear in Bengali food, or maybe the reason was more economic: it was simply price-prohibitive for my parents (it coincidentally appeared in my kitchen once my undergrad student loans were paid off). In light of recent political developments in the Kashmir region, this Eater piece feels incredibly relevant.

As the farmers have begun to say, “the red-gold is turning to gray.” Due to ongoing regional violence, droughts, and the still-unfolding effects of climate change on the land, Kashmiri saffron has slowly begun to disappear. “I tried to grow apples here on this land a decade ago,” Mir says. “But they didn’t fruit! This land is meant only for saffron. Without it, it means nothing.”

How globalisation created British cuisine: If I were to read a full-throated intellectualization on the unseen grandeur of British cuisine, I'd certainly want it to be from The Economist:

“People talk about how good British food is in relation to how terrible it used to be,” says Mr Brown as he washes down his pie with a pint of Barnsley Bitter at the Old No 7 pub down the street. “My contention is that it didn’t use to be terrible at all.”

The claim carries a taste of parochialism. But Mr Brown’s argument is built around globalism. His defence is not that a full English bests a croissant (though it obviously does), but that the virtue of Britain’s cuisine lies in the country’s historical openness to the world. The country has long been what David Edgerton, a historian, calls “the hub of an extraordinary gastro-cosmopolitanism”.

Everything we love to eat is a scam: For my fellow bougie, food-trend-following consumers, just remember that, like our news and our commoditized Amazon purchases, everything we consume now is fake:

Fraudulence spans from haute cuisine to fast food: A February 2016 report by Inside Edition found that Red Lobster’s lobster bisque contained a non-lobster meat called langostino. In a statement to The Post, Red Lobster maintains that langostino is lobster meat and said that in the wake of the IE report, “We amended the menu description of the lobster bisque to note the multiple kinds of lobster that are contained within.”
