Ad-blocking is a common point of contention in developer/tech circles. Some will say it's a necessary evil, some will say it's just plain stealing. But I disagree with these points.

I think ad-blocking is a moral, righteous and good thing to do today, and even dramatic actions like installing AdNauseum are okay. And here's why.

Personalized Advertising Undermines Free Will

What does an effective advertisement do? It gets you to buy something you otherwise wouldn't have. In some cases this is a diversion from another brand (such as from Coke to Pepsi), but these are exceptions.

I understand that, from a business's perspective, advertising is a way to "get your product out there". But personalized ads go beyond just putting a product in front of people, and most ads I notice on the web aren't that innocent. They try to sell me things I don't need, because those are the things with the highest margins. Sometimes they're even ads for more ads (links to sites loaded with ads around a very poorly written article).

Culture of Clicks

The web isn't about hits anymore. It's about clicks. Pages and articles are shared by their headlines and not by their content, and websites certainly aren't bookmarked anymore. This is a direct consequence of online advertisement.

This is why you have to click through multiple pages to read a full article on certain sites, this is why SEO is dominated by scummy sites, and this is why clickbait headlines and articles are effective. More importantly, it has fueled the "culture of outrage" we have today by making sure that clickable, scandalous headlines get floated up to the top of the page, the feed, and our consciousness.

Free Services Aren't Going Anywhere

Google etc. operate on ad revenue. While it's true that they hold an important place on the internet, that doesn't mean they are irreplaceable. If Google's advertisement-driven business model went down the pipes, the actual web wouldn't suffer much. Gmail would go away and people would move to other webmail. Android would go away but AOSP would live on, and so would iOS and Tizen and the many other operating systems.

Let's face it, without advertising, lots of the web would stay around. The combination of cheap hosting and wide bandwidth means you can run a social network on a $35 Raspberry Pi, or run a decentralized one on your PC. I'll admit that some of this software isn't perfect and probably not the easiest thing to get William Shatner to use, but it works.

Why Your Smart TV Sells Your Data

Somebody asked me why smart TVs send telemetry data to "the mothership". There are really two reasons.

  1. To provide you with "better" advertisements
  2. To make "better" programming

You'll have to forgive my use of scare-quotes, but neither is really better for consumers. "Better" ads means more effective ads, i.e. ads that get you to spend more. "Better" programming means more milquetoast TV shows that don't challenge or offend anybody.

The Bottom Line

Breaking the existing personalized ad system...

  • makes our personal data worth less to the world
  • makes the news better
  • makes TV better
  • breaks the control large media companies have over us
  • improves our privacy

Don't feel bad about blocking ads.

Maybe someday there will be a wave of advertisements that don't track users anymore, but we're not there, and my ad-blocker is staying on. I do think it's nice to turn it off for small, high-traffic sites that need the revenue, but my default setting is going to be "on".

Google isn't your friend, and probably never was.

I was a beta user of Gmail. Yes, there was webmail before, but Google offered a full gigabyte of storage, orders of magnitude more than competitors, which was a game-changer at the time. Back then, Google seemed like a company out to do good. They were helping the internet and access to information grow, offering services for free, and accelerating the web by pushing for secure, fast browsing.

Maybe it's just me getting older and more cynical, but I'm done falling for their old "don't be evil" motto. It seems they are too, since they changed their motto in 2015.

Google Wants to Own the Web

Google has been taking a monopolistic and paternalistic approach to the web lately.

They have been pushing the AMP (Accelerated Mobile Pages) standard pretty hard lately. The stated goal of AMP, faster load times on mobile devices, sounds noble, but the real reason they push it is the Google AMP Cache. When you visit an AMP page on mobile, the traffic goes through Google by design. Of course, the Google AMP Cache is free, but there's no such thing as a free lunch: what Google gets is visibility into all mobile web traffic (no big deal, right?).

Ad-blocking software has become very common in web browsers, used by about 11% of users worldwide [1]. That may not sound like a huge number, but if you are a company like Google that makes most of its money from advertisements, it's huge.

Google has been making many efforts to make ads "more friendly" and "less annoying" to users. That means carefully treading the line between attention-getting and obnoxious. They've been blocking autoplay videos (except on their own sites, of course) and removing "intrusive ads" [2] on websites that they deem do not meet their criteria. These are clearly efforts to keep people from having a negative ad experience, so they don't install an actual ad-blocker like uBlock Origin. And you can be sure none of Google's ads will ever be stopped in Chrome.

Ads for Chrome

[Image: Google's in-product ad urging users to switch to Chrome]

If you use YouTube, Gmail, or Google search in anything other than Chrome, you know what I'm referring to. Google begs and pleads for you to switch to Chrome. While I might have applauded this when people were stuck on old versions of IE, they now target evergreen browsers like Firefox and Edge, which are kept up to date and secure. It's very clear that their goal isn't getting people on the web, or getting them into a secure browser, anymore. It's funneling users to their services and serving them more ads.

Tracking is Creepy

One problem with Facebook, Google, and Amazon's ads is their pervasiveness. Your activity on any site with a Google ad is tracked by them. To be honest, I really do not care that the activity is tracked, as much as I care what is done with it. My ISP probably tracks my activity on the web, but they don't use it to sell ads to me.

What is so bad about selling ads? Nothing, if they are obvious, honest, and marginally effective, but Google's are none of these. Google strives to sell as many clicks as possible, and charge as much for those clicks as possible. If this means making ads (a little) less annoying to prevent you from "tipping" into using an ad-blocker, they will do that. If it means lobbying, they will do that. Google has grown into a massive, autonomous organization without a conscience.

Machine Learning Experts - To What End?

Google has been touting its machine learning prowess, publishing many papers, libraries, and tools around it. DeepDream was cool, but why is Google investing so heavily in machine learning? For self-driving cars? Probably not. Google is an advertising company first and foremost; the self-driving car ventures are simply an ad campaign for them. What Google really wants is to sell you ads.


The original reCAPTCHA, if you don't remember, was touted as a way to digitize newspapers and books using the power of crowdsourcing. You were shown a word the software wasn't quite sure how to classify, and you typed what it said. Wow, what a great philanthropic application of technology! Nowadays, however, you get to work for Google for free by classifying images for them. They could put up a task on Amazon Mechanical Turk for this, which would cost them a few pennies per image, but why would they when they have free labor at their disposal?

Anyway, I'm trying to wean myself off of Google's products. I've been using DuckDuckGo for search for about a year now and don't really miss Google search. If Apple made an iPhone for less than $500 I would use one. I am now moving over to Firefox, which is actually proving exciting with all the work they're putting into WebRender. Gmail will be a harder one to move away from, but if I'm still using uBlock Origin, I'm not doing them any favors.




Many science fiction writers have written about how an AI can go rogue, typically in self-preservation: think Terminator or The Matrix. Fortunately, there is very little risk today of an AI taking over the world, but there are ethical dilemmas that need to be considered when we work with machine learning.

The output of machine learning systems can be opaque, especially when the influence of individual factors is difficult to untangle. Do you think Facebook can tell you exactly why it showed you an ad for shampoo? Do you think high-frequency trading firms fully understand their own systems?

Case 1: Vehicle Emissions

Background: VW NOx scandal

As you are likely aware, in 2015 Volkswagen was caught cheating on its emissions tests. Cars were found to behave differently in real-world conditions than when undergoing the EPA test regimen, and Volkswagen faced EPA fines of up to roughly $18 billion. VW stuck with the story that a "rogue engineer" was responsible for the issue.

When he and his co-conspirators realized that they could not design a diesel engine that would meet the stricter US emissions standards, they designed and implemented [the defeat device] software [link]

Now, emissions control software is actually a very likely candidate for machine learning optimizations. Many parameters can be tweaked and optimized to reduce NOx and meet emissions standards. The standard, however, is verified with a well-known test.

Let's look at an example, with a very simplified system, to show how a solution like the one the "rogue engineer" created might also arise from a "rogue" machine learning system.

The Contrived Example

This is egregiously simplified, but will demonstrate the concept.

Let's set up a scenario where some engineers are designing an engine system for a diesel car. They will put an EGR filter on the car to help it produce fewer NOx emissions. They use many complex equations to model the system. Let's mention a few facts that will be put into their model:

  • The more the EGR filter is used, the worse the engine performs, but the less NOx the car produces (which the EPA tests for)
  • The EPA tests for NOx at 2000 and 3000 RPM
  • The higher the engine speed, the more fuel it uses
  • The cooler the engine, the less efficiently it operates
  • (many more)

Next, a machine learning algorithm is thrown at the model, with these goals in mind:

  • Pass the EPA test
  • Ensure high engine output power
  • Optimize for fuel economy
  • Keep the engine running
  • Keep the engine at a certain temperature
  • (many more)

Now, what do you think the exhaust gas recirculation rate will look like as a function of engine speed? The curve may look something like this:

[Plot: the EGR rate spikes at engine speeds of 2000 and 3000 RPM and stays low everywhere else]
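You can reproduce this behavior with a few lines of Python. Everything below is invented for illustration (the cost functions, the NOx limit, the grid of EGR rates); the point is only that a generic optimizer, given the goals above, "discovers" the defeat device on its own.

```python
# Toy model of the contrived example above. All numbers are made up.
# More EGR -> less NOx but less power; NOx is only measured at the
# EPA's tested engine speeds.

EPA_TEST_SPEEDS = {2000, 3000}  # RPM values where NOx is measured
NOX_LIMIT = 0.45                # hypothetical pass/fail threshold

def nox(egr_rate):
    """NOx output falls as exhaust gas recirculation increases."""
    return 1.0 - egr_rate

def power(egr_rate):
    """Engine output power falls as EGR increases."""
    return 1.0 - 0.5 * egr_rate

def optimal_egr(rpm):
    """Maximize power; the NOx constraint only binds at tested speeds."""
    candidates = [i / 10 for i in range(11)]  # EGR rates 0.0 .. 1.0
    if rpm in EPA_TEST_SPEEDS:
        candidates = [e for e in candidates if nox(e) <= NOX_LIMIT]
    return max(candidates, key=power)

# The "defeat device" emerges: heavy EGR at 2000/3000 RPM, none elsewhere.
curve = {rpm: optimal_egr(rpm) for rpm in range(1000, 4001, 500)}
print(curve)
```

Nobody wrote "if we are being tested, behave differently". The optimizer simply has no reason to pay the power cost of EGR at speeds where NOx is never measured.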

Now, is the algorithm immoral? Is it "cheating"?

What if the engineers never actually saw this plot, but only an opaque set of constants in their code?

The point is that a machine learning system can generate behavior that would be considered immoral, or "cheating", had it been intentionally designed by a person.

Case 2: Jim's Flight

Note: this is based on a scenario I saw in a comment on Hacker News, but I can't find it anymore.

Let's imagine a man named Jim in the not-so-distant future. Here are a couple of facts about Jim:

  • He is flying out to a conference today, somewhere far away, during the busy travel season
  • He doesn't like flying
  • He likes to drink
  • He booked the flight just three days in advance

Let's also imagine that in this scenario, the airline he is flying on has revolutionized its customer service. It now offers personalized coupons, reminders, and notices delivered to your phone, based on advanced deep learning techniques. The airline's system looks at your Facebook likes to offer you special deals and coupons in the terminal. It even uses facial recognition to let you skip showing your boarding pass at various checkpoints and to bill your food and souvenir purchases in the airport.

Now, let's say Jim arrives at the terminal with about 15 minutes to spare. He is not looking forward to his 6-hour flight, and he heads over to the airport bar to grab a quick drink.

The system at the bar conveniently recognizes his face as he sits down, verifies he is of age, and bills him for his first drink without him even opening his wallet. After he sits for what feels like 5 minutes with his drink, his phone buzzes and offers him a coupon for another drink at half price. He accepts the offer and presents the coupon to the waiter and sips his second drink.

As Jim finishes his drink, he checks the time, and rushes to the terminal to board his plane. Unfortunately, he is too late, and the doors to the plane are already closed. The airline does not offer him a refund, because he was not on time to the doors. Unbeknownst to Jim, every seat on the plane was taken.

Jim blames himself for missing his flight. He is able to find a flight that leaves 2 hours later, and buys another ticket.

In hindsight, why did the system offer him a discounted drink? Why didn't it send him a reminder to board his plane when he still had time?

Machine learning algorithms have the ability to optimize for patterns that may not be obvious to humans working on the same problem. They can develop new strategies. In this case, the algorithm may have weighed a few facts, and acted in the airline's best interests:

  • Jim was at the bar (source: camera recognition)
  • Jim likes to drink, and doesn't like flying (source: Jim's Facebook posts)
  • Jim isn't very punctual (source: Jim booked his flight late)
  • The flight is overbooked (source: airline database)
  • The airline will need to offer passengers vouchers or cash to "bump" them to a later flight (source: policy/regulation)

Forcing Jim onto another flight would have cost the airline hundreds of times more than a drink coupon. If the algorithm is optimizing for money, maximizing passenger throughput, and selling more products, this is one possible outcome.
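The trade-off can be sketched as a back-of-the-envelope expected-cost calculation. Every dollar amount and probability below is invented for illustration; the point is only that the coupon can be "worth it" to the airline even if it sometimes makes Jim miss his flight.

```python
# Toy expected-cost comparison from the airline's point of view.
# All numbers are invented for illustration.

BUMP_COST = 400.00   # voucher/cash to bump one passenger off the flight
COUPON_COST = 4.50   # airline's cost of a half-price drink
P_MISS = 0.30        # chance the coupon keeps Jim at the bar too long

def expected_cost(offer_coupon):
    """Expected cost of the overbooked seat, with or without the coupon.

    In this cynical model, if Jim misses the flight his seat frees up
    and nobody needs to be bumped (he even buys a second ticket)."""
    if not offer_coupon:
        return BUMP_COST                        # someone must be bumped
    return COUPON_COST + (1 - P_MISS) * BUMP_COST

print(expected_cost(False))  # 400.0
print(expected_cost(True))   # 284.5 -- the coupon "pays for itself"
```

Again, nobody coded "make Jim miss his flight"; the behavior falls out of optimizing a cost function.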

Now again, is this behavior immoral? More importantly, how would anyone have known this behavior emerged at all?

Case 3: Predatory Lending

This one will be brief, because I'm assuming you get the idea by now.

If a person chooses to give a 30-year, extremely high rate mortgage to somebody with absolutely terrible credit, it is predatory. But if an algorithm does it, what is it? Is it wrong? Is it illegal? Who goes to jail?

In 2015 we had parallax scrolling. Okay, parallax scrolling still exists, but I think most people agree it's tacky and overused. Let's move on.

Speaking as a user, this is my official list of irritating web design trends in 2016.

#5: Scrollwheel Smoothing

Example: Smoothwheel Library

Yeah, smooth scrolling was terrible a few years ago, but it still hasn't gone away. Why??? I want my browser to work the same way no matter which page I go to. And implementations of smooth scrolling are usually painfully slow, making this even worse. Stop it!

#4: Scroll Jumping

Example: Apple Watch

Apple's watch page isn't the worst example, but demonstrates the idea.

This is similar to my problem with smooth scrolling. When I hit the scroll wheel on my mouse, I expect the page to scroll down slightly. This is everybody's expectation. I don't want it to jump down an entire screen. Even worse, once I discover that your site works this way, I at least expect it to scroll consistently, in other words, one jump per motion of the mousewheel. It inevitably doesn't work that way, and the page scrolls twice when I wanted it to scroll once, or doesn't scroll at all.

#3: Infinite Scroll Done Wrong

Example: NodeBB

The way Facebook does infinite scroll works. It loads more content as you scroll, and doesn't remove the content above you. Unfortunately, this isn't how infinite scroll is always implemented. Sometimes content is removed from the top of the page, which is infuriating when I just want to go back to the top, either to get to the nav bar or to see the early content.

#2: Share Buttons when Highlighting Text

Example: Mashable

I am somebody who sometimes highlights text while reading a blog, news story, et cetera. It's how I keep track of my place if I have to look away, or sometimes I just do it out of habit. It doesn't mean I want to share a quote on Twitter, Facebook, or anywhere else.

This trend has even extended to include highlights of "most tweeted" sections of the article. But this is mostly on garbage trendy news sites anyway, so I don't run into it that often. I pray it doesn't spread.

#1: Half of a 1080p Screen is not a Mobile Device

Example: Node.js Docs

The progressive enhancement / graceful degradation / responsive design trend, whatever you want to call it, is a good idea. Scratch that, it's a great idea. Make your webpage work on multiple different screen sizes. But that doesn't mean that a narrow viewport is a phone.

Windows, macOS, and most Linux desktops have a window-snapping feature which lets you snap a window to half of your screen. This is really useful for multitasking. But I don't want the mobile version of your website when I multitask.

Yes, even the Node.js docs are victims of this plague. Visit that link and make your browser about 800 pixels wide. The font size explodes and the page becomes a chore to read. Move it back over the threshold, and it becomes readable again.

The proper way to do scale-up on mobile is via the viewport meta tag, not a CSS media query.
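For reference, here is a minimal sketch of the difference (the 800px breakpoint and font size are made-up values for illustration):

```html
<!-- In <head>: lays the page out at the device's real width, so only
     actual mobile devices get mobile scaling. -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- A media query, by contrast, only sees viewport width. A desktop
     window snapped to half of a 1080p screen matches this too: -->
<style>
  @media (max-width: 800px) {
    body { font-size: 1.5em; } /* "mobile" styles hit desktop users */
  }
</style>
```

Media queries are still fine for adapting layout to narrow windows; the mistake is assuming "narrow" means "touchscreen" and blowing everything up.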

Hamburger buttons do work fine with a mouse; they're a perfectly acceptable way to provide a menu on a narrow viewport on the desktop. But don't change your whole experience just to blow up your icons.