If you get into a car accident in China in the near future, you'll be able to pull out your smartphone, take a photo, and file an insurance claim with an AI system.
That system, from Ant Financial, will automatically decide how serious the ding was and process the claim accordingly with an insurer. It shows how the company—which already operates a hugely successful smartphone payments business in China—aims to upend many areas of personal finance using machine learning and AI.
The e-commerce giant Alibaba created Ant in 2014 to operate Alipay, a ubiquitous mobile payments service in China. If you have visited the country in recent years, then you have probably seen people paying for meals, taxi rides, and a whole lot more by scanning a code with the Alipay app. The system is far more popular than the wireless payments systems offered in the U.S. by Apple, Google, and others. The company boasts more than 450 million active users compared to about 12 million for Apple Pay.
Ant’s progress will be significant to the future of the financial industry beyond China, including in the U.S., where the company is expanding its interests. The company’s approach goes around existing institutions to target individuals and small businesses who lack access to conventional financial services. Ant said in April of this year that it is buying the U.S. money-transfer service MoneyGram for $880 million. The deal is subject to regulatory approval and should close in the second half of this year. The company could well apply the technologies it is developing to its overseas subsidiaries. A spokesperson for the company says it hasn’t brought Alipay to the U.S. because existing financial systems provide less of an opportunity.
Yuan (Alan) Qi, a vice president and chief data scientist at Ant, says the company’s AI research is shaping its growth. “AI is being used in almost every corner of Ant’s business,” he says. “We use it to optimize the business, and to generate new products.”
The accident-processing system is a good example of how advances in AI can flip an existing system on its head, Qi says. It has become possible to automate this kind of image processing in recent years using a machine-learning technology known as deep learning. By feeding thousands of example images into a very large neural network, it is possible to train it to recognize things that even a human may struggle to spot (see “10 Breakthrough Technologies 2013: Deep Learning”).
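As a rough illustration of the kind of training described above, here is a minimal, hypothetical sketch in PyTorch of fine-tuning a pretrained image model on damage-severity labels. The labels, data, and model choice are illustrative assumptions; Ant has not published its implementation.

```python
# Hypothetical sketch: fine-tune a pretrained CNN to grade accident-photo damage.
# Labels, data, and architecture are illustrative assumptions, not Ant's system.
import torch
import torch.nn as nn
from torchvision import models

NUM_LEVELS = 4  # e.g. scratch, dent, panel replacement, total loss (assumed labels)

model = models.resnet18(pretrained=True)                 # start from generic image features
model.fc = nn.Linear(model.fc.in_features, NUM_LEVELS)   # new head for damage levels

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch; in practice this would come from a labeled photo dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_LEVELS, (8,))

for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```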
“We use computer vision for a job that is boring but also difficult,” Qi says. “I looked at the images myself, and I found it pretty difficult to tell the damage level.”
Qi speaks a mile a minute, which seems appropriate given how quickly his company seems to be moving. Dressed in a smart shirt and dress pants on a sweltering afternoon in Beijing this May, shortly after giving a speech at a major AI conference, Qi explained that the company considers itself not a “fintech” business but a “techfin” one, due to the importance of technology.
Ant already operates a range of other financial services besides Alipay. For instance, it provides small loans to those without a bank account. It assesses a person’s creditworthiness based on his or her spending history and other data including friends' credit scores (see “Alipay Leads a Financial Revolution in China”).
Ant’s creditworthiness system also provides a high-tech way to obtain various services, such as hotel bookings, without a deposit. Qi says that Ant uses advanced machine-learning algorithms and custom programmable chips to crunch huge quantities of user data in a few seconds, to determine whether to grant a customer a loan, for instance.
A recent hire offers some measure of Ant’s intent to apply artificial intelligence to finance. This May the company announced that Michael Jordan, a professor at the University of California, Berkeley, and a major figure in the fields of machine learning and statistics, would become chair of the company’s scientific board.
Qi is no slouch, either. He got his PhD from MIT and became a professor in the computer science department at Purdue before joining Alibaba in 2014. Once there, he developed Alibaba’s first voice-recognition system for automating customer calls.
“We built a system, based on deep learning, to carry on conversations; to provide answers to your questions,” Qi says. This chatbot system also taps into a knowledge base of information created by Ant, and is an example of how researchers are increasingly combining cutting-edge machine-learning techniques with conventional representations of knowledge. “Human language is still very hard for a machine to understand,” Qi says.
In March this year, the chatbot system surpassed human performance in terms of customer satisfaction, says Qi. “There are many, many chatbot companies in Silicon Valley. We are the only one that can say, confidently, they do better than human beings,” he says.
Ant’s success to date has certainly been impressive. Credit Suisse estimates that it manages 58 percent of mobile payments in China. A key competitor has emerged in recent years in WeixinPay, from the mobile chat giant Tencent, which now accounts for almost 40 percent of the market. Ant remains enormously valuable, though. Earlier this year, a Hong Kong investment group valued the company at $75 billion. The company was expected to make an initial public offering this year, but that now looks more likely to happen in 2018.
Ant is also increasingly looking to expand its interests overseas. The company has invested almost $1 billion in Paytm, an Indian payments company. It has also invested in Ascend, a Thai online payments business, and M-Daq, a Singaporean financial business. Ant apparently also sees investments and acquisitions as a way to bolster its technological prowess. Last year the company acquired EyeVerify, a U.S. company that makes eye recognition software.
I’ve written in the past (twice) about how to streamline the writing process when using LaTeX. Since then, I’ve found that I enjoy writing even more when I don’t have to reach for LaTeX at all. By reaching first for Markdown, then for LaTeX when necessary, writing is easier and more enjoyable.
Writing at the Command Line
Last year, I gave a talk about the merits of writing primarily at the command line. My main claims were that when writing we want:
- an open document format (so that our writings are future proof)
- to be using open source software (for considerations of privacy and cost)
- to optimize for the “common case”
- to be able to write for print and digital (PDFs, web pages, etc.)

Markdown solves these constraints nicely:
- It’s a plain text format—plain text has been around for decades and will be for decades more.
- Given a plain text format, we can bring our own text editor.
- Plenty of open source programs manipulate Markdown.
- When we need advanced features, we can mix LaTeX into our Markdown documents.

For those unfamiliar with Markdown, it’s super quick to pick up. If you only look at one guide, see this one:
If you want to start comparing features available in certain implementations of Markdown:
- GitHub Flavored Markdown
- Markdown.pl
- Pandoc Markdown

For more on why you should want to be writing at the command line, you can check out the talk slides.
Pandoc Starter
The central tool I spoke about in Writing at the Command Line is Pandoc. Pandoc is an amazingly simple command line program that takes in Markdown files and spits out really anything you can think of.
To make using Pandoc even easier than it already is, I put together a collection of starter templates. They’re all available on Github if you’d prefer to dive right in.
There are currently six different templates, specialized for the kind of document you’d like to create. Each has a README for installation and usage instructions, as well as a Makefile for invoking pandoc correctly.
All the templates generate PDFs from Markdown by way of LaTeX. In addition to Pandoc, you’ll also need LaTeX installed locally.
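For a rough sense of what those Makefiles boil down to, here is a minimal sketch (mine, not taken from the templates) that writes a small Markdown file and converts it to a PDF with Pandoc. It assumes pandoc and a LaTeX distribution are already on your PATH.

```python
# Minimal sketch of the Markdown -> LaTeX -> PDF step the templates automate.
# Assumes `pandoc` and a LaTeX distribution are installed and on PATH.
import subprocess
from pathlib import Path

Path("notes.md").write_text(
    "# A heading\n\n"
    "Some *Markdown* prose, with inline LaTeX like $e^{i\\pi} + 1 = 0$.\n"
)

# Pandoc infers PDF output from the extension and runs LaTeX under the hood.
subprocess.run(["pandoc", "notes.md", "-o", "notes.pdf"], check=True)

# For slides, the same idea with the beamer writer:
# subprocess.run(["pandoc", "slides.md", "-t", "beamer", "-o", "slides.pdf"], check=True)
```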
article
This template uses the standard LaTeX article document class. It’s a no frills, no nonsense choice.
article template
tufte-handout
As an alternative to the article document class, there’s also the tufte-handout document class. It originates from the style Edward Tufte popularized in his books and articles on visualization and design.
Apart from a different font (it uses Palatino instead of the default Computer Modern), this template features the ability to add side notes to your documents. I often find myself reaching for this template when I want to disguise the fact that I’m secretly using LaTeX.
tufte-handout template
homework
A second alternative to the article document class is the homework document class. It works nicely for homework assignments and problem sets. The class itself has a number of handy features, like:
- the option to put your name on every page, or only on the first page
- an option to use wide or narrow margins
- most of the AMS Math packages you’d include in the process of typesetting a math assignment
- a convenient environment for typesetting induction proofs

For more features and usage information, check out this blog post or the source on GitHub.
homework template
beamer
LaTeX provides the beamer document class for creating slides; this template makes it even easier to use:
- Make a new slide with a “##” header
- Make a section divider with a “#” header
- Mix lists, links, code, and other Markdown features you’re familiar with to create the content for a slide.

So basically, just write the outline for your talk, and Pandoc takes care of making the slides—it doesn’t get much simpler.
beamer template
beamer-solarized
The default beamer styles are pretty boring. To add a bit of flair and personality to my slide decks, I made a Solarized theme for beamer.
In addition to the screenshot below, the Writing at the Command Line slides I linked to earlier also use this theme, if you want to see a less contrived example.
beamer solarized template
book-writeup
Finally, sometimes a simple article or slide deck doesn’t cut it. Usually this means I’d like to group the writing into chapters. This template makes writing a chapter as easy as using a “#” Markdown header.
book writeup template
Writing Plugins for Vim
If you happen to use Vim, I’d highly recommend installing goyo.vim for writing. It removes all the visual frills Vim includes to make writing code easier so you can focus on your writing without distractions.
I also really enjoy vim-pandoc and vim-pandoc-syntax. They’re a pair of complementary plugins for highlighting and working with Pandoc Markdown-flavored documents. They work so well that I use them for Markdown documents even when not using Pandoc.
Reach for Markdown
Writing should be a pleasant experience. With the right tools, it can be. LaTeX is powerful but cumbersome to use. With Markdown, we can focus on our writing, and worry about the presentation later. Pandoc can take care of the presentation for us, so the only thing left to do is start.
https://www.haroldserrano.com/blog/books-i-used-to-develop-a-game-engine
If you have decided to develop your game engine, you may be wondering where to start, what books to start reading, etc. If you are in this situation, I recommend getting a copy of the following books found here. Out of all the books I've read, they are the best of the best, so I strongly recommend them.
Now, if you want a comprehensive list, the list below should help you.
Books to develop the Math Engine
Books to develop the Rendering Engine
Books to develop the Physics Engine
If you are wondering why you need to learn Blender, read the following article.
Game Engine Architecture
Hope it helps.
http://audreyii-fic.tumblr.com/post/170886347853/the-entirely-unnecessary-demise-of-barnes-noble
“Whether the Andrea Gail rolls, pitch-poles, or gets driven down, she winds up, one way or another, in a position from which she cannot recover. Among marine architects this is known as the zero-moment point – the point of no return.” –Sebastian Junger, “The Perfect Storm”
Posts like this aren’t my usual fare, but there’s a lot of readers on Tumblr. So y’all might be interested – or, if not, you really should be.
On Monday, this went down:
That’s the bloodless, matter-of-fact, ho-hum business event way of describing it. Let me paint you a different picture.
On Monday morning, every single Barnes & Noble location – that’s 781 stores – told their full-time employees to pack up and leave. The eliminated positions were as follows: the head cashiers (those are the people responsible for handling the money), the receiving managers (the people responsible for bringing in product and making sure it goes where it should), the digital leads (the people responsible for solving Nook problems), the newsstand leads (the people responsible for distributing the magazines), and the bargain leads (the people responsible for keeping up the massive discount sections). A few of the larger stores were able to spare their head cashiers and their receiving managers, but not many.
Just about everyone lost between 3 and 7 employees. The unofficial numbers put the total around 1,800 people.
People.
We’re not talking post-holiday culling of seasonal workers. This was the Red Wedding. Every person laid off was a full-time employee. These were people for whom Barnes & Noble was a career. Most of them had given 5, 10, 20 years to the company. In most cases it was their sole source of income.
There was no warning.
But it gets worse.
The people who lost their jobs had been actively assured this would NOT happen for the past several months. Home Office decided last year that these positions – head cashiers, receiving managers, leads – were due to be eliminated… but no layoffs were to take place. All current employees were to be grandfathered in. The positions wouldn’t go away until the people currently holding them chose to leave.
For months they told everyone this.
Then on Monday, each person was called into the manager’s office. Fifteen minutes later, each person gathered up their things and left.
Severance packages varied; usually as little as two weeks’ pay. Demotions were not offered. Those laid off – and I’d like to reiterate that these were career employees – were offered the opportunity to reapply for part-time positions. At base pay.
“Thanks for your 17 years of service. If you’d like to come back in two months to work registers for minimum wage, we’d love to have you.”
(Don’t let the glamor of book sales fool you. B&N entry pay is minimum wage.)
Now. You’re going to hear a lot about the dropped sales. You’re going to hear a lot about Amazon. You’re going to hear about how there was no way around this.
That, dear friends, is bullshit.
I want to talk to you about why sales dropped during the holidays.
A company wants cash on hand to look good to stockholders. The quickest way for a company to get cash on hand is to cut back on payroll. Which Barnes & Noble did this holiday season. During December, staffing at most locations was no different than it would be on your average day in June.
Something kinda important is happening in December, btw.
In particular, hours in receiving were carved to the bone. You know what that means? It means that product – product that could be selling – sat unopened in boxes. In many cases, those boxes had already been logged into the system. The computers showed we had them. Customers came in, expecting to purchase things, knowing they were in the store! But what they wanted was buried under 100, 200 boxes. And there were no employees to find them. There were barely any employees available at all.
Customers went away annoyed. And they shopped on Amazon instead.
Because, well, why not?
(As a side note: people often want to know why Amazon’s prices are so much lower than B&N’s, and why B&N doesn’t price match. There’s a lot of different reasons, but the biggest is that Amazon loss-leads their books: that is, sells them at a loss, then makes up the money with expensive add-ons, like Echos or Kindles or other non-book stuff.
Barnes & Noble only has books; they can’t make up the discount loss by selling water coolers. Amazon undercuts prices, and once they’re the only players in the market, that will stop. Just so you know.)
Yeah, sales sucked. Cutting hours during the busiest season of the year punched quarter earnings in the gut. Big shock.
So, if you were trying to solve the problem – if you were trying to revitalize this business – what would you do?
“Oh, I know! I would tighten my belt at the executive level, then I would double-down on what we can offer that Amazon can’t: enthusiastic staff that can find and upsell books to suit each customer, and the largest in-store selection possible so that everyone who comes in can walk away with what they want. If I absolutely had no choice but to reduce payroll, I would eliminate some part-time positions, and count on the knowledge and experience of our veterans to get us through the lean times.”
Those are definitely the choices you would make if you wanted to rebuild a company to last.
But here’s a secret:
The Barnes & Noble executives do not intend to rebuild.
How do I know this? Because every decision from the upper levels is being made solely to increase cash on hand.
There’s been so many things – so many things – but let me tell you about the canary in the coal mine. Let me tell you how I know saving Barnes & Noble is not in the home office’s plans.
Last summer, the decision was made to switch to “ship from store”. Previously, when a customer ordered a book online, the book would be shipped to them from one of our warehouses. The new policy, however, had stores taking books off their shelves, packaging them up, and sending them out each day.
This was to “decrease shipping time by sending books from the closest location to the customer.”
(Spoiler alert: that wasn’t true.)
So each store takes employees off the selling floor – where they could, you know, help customers – and sets them to fill orders. The stores remove books from their own shelves and mail them out.
The stores do not get credit for those sales.
Let me repeat:
The stores do not get credit for those sales.
The company makes money. The brick and mortar store – which Barnes & Noble is based on – loses the opportunity to sell that book (pissing off customers), and gets nothing in return.
Which hurts the bottom line of that store.
“Uh-oh, your sales dropped. Better cut back hours.”
This is a decision that is only made if the executive level of a company is no longer interested in helping their business. This is a decision that is made only if the executive level has decided the company is dying, and don’t care if they hasten along the demise as long as they can harvest the organs for themselves and leave everyone else with the shriveled husk.
Speaking of organ harvesters.
By the way, it should be noted that the last CEO, who worked for Barnes & Noble for less than a year, received a $4,500,000 payout. The CEO before him, who also worked at Barnes & Noble for less than a year, received a $10,000,000 payout.
The company saved $40m by firing 1,800 employees.
After paying out $14,500,000 to two executives.
Which brings us back around to Monday’s layoffs.
By getting rid of their most expensive (ie, most experienced) workers, B&N was able to replace said workers with part-time, benefit-less, minimum-wage employees. This is not to knock newbies – we were all ones once! – but a new hire can’t do what a 15 year veteran can. They just can’t. Not right away. Not for a long time. And not in the specialized departments that were laid off: head cashiering, receiving, digital, newsstand, bargain. Those are hard jobs that take training.
Who’s going to train?
And who’s going to want to be trained for something that hard, when there’s no possibility of a promotion down the line? When you’re only working minimum wage for a maximum of 25 hours a week?
At the beginning of this post, I quoted Junger and the zero-moment point. The loss of these veterans, and the positions they worked, cannot be recovered from. Barnes & Noble is going under.
“That sucks, but it’s the way capitalism works. Hate the game, not the player.”
Sure. Maybe.
But this.
Except this.
On Monday morning, as thousands of lives were upended, Barnes & Noble – literally in the same hour – released this:
“His deep knowledge of retail and proven track record are exactly what we need to invigorate our merchandising strategy and grow our business.”
Grow our business.
I don’t know what happens after Barnes & Noble sinks. It’s all well and good to say “Support indie stores!” but there are huge swaths of America where there aren’t any. B&N is the last thing standing between Amazon and a total monopoly of the publishing industry, and a monopoly is never a good thing. But the entire book world needs to be prepared, because it’s coming.
The zero-moment point came Monday, and in the crassest, cruelest, most heartless way possible.
Barnes & Noble has slit its own wrists. Now we just wait to bleed out.
Last year, computer scientists at the University of Montreal (U of M) in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark's source code wasn't published. The researchers had to recreate it from the published description. But they couldn't get their version to match the benchmark's claimed performance, says Nan Rosemary Ke, a Ph.D. student in the U of M lab. "We tried for 2 months and we couldn't get anywhere close."
The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols. "I think people outside the field might assume that because we have code, reproducibility is kind of guaranteed," says Nicolas Rougier, a computational neuroscientist at France's National Institute for Research in Computer Science and Automation in Bordeaux. "Far from it." Last week, at a meeting of the Association for the Advancement of Artificial Intelligence (AAAI) in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.
The most basic problem is that researchers often don't share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm's code. Only a third shared the data they tested their algorithms on, and just half shared "pseudocode"—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)
Researchers say there are many reasons for the missing details: The code might be a work in progress, owned by a company, or held tightly by a researcher eager to stay ahead of the competition. It might be dependent on other code, itself unpublished. Or it might be that the code is simply lost, on a crashed disk or stolen laptop—what Rougier calls the "my dog ate my program" problem.
Assuming you can get and run the original code, it still might not do what you expect. In the area of AI called machine learning, in which computers derive expertise from experience, the training data for an algorithm can influence its performance. Ke suspects that not knowing the training data for the speech-recognition benchmark was what tripped up her group. "There's randomness from one run to another," she says. You can get "really, really lucky and have one run with a really good number," she adds. "That's usually what people report."
Figure: In a survey of 400 artificial intelligence papers presented at major conferences, just 6% included code for the papers' algorithms, about 30% included test data, and 54% included pseudocode, a limited summary of an algorithm. (Data: Gundersen and Kjensmo, Association for the Advancement of Artificial Intelligence, 2018; graphic: E. Hand/Science)
At the AAAI meeting, Peter Henderson, a computer scientist at McGill University in Montreal, showed that the performance of AIs designed to learn by trial and error is highly sensitive not only to the exact code used, but also to the random numbers generated to kick off training, and to "hyperparameters"—settings that are not core to the algorithm but that affect how quickly it learns. He ran several of these "reinforcement learning" algorithms under different conditions and found wildly different results. For example, a virtual "half-cheetah"—a stick figure used in motion algorithms—could learn to sprint in one test but would flail around on the floor in another. Henderson says researchers should document more of these key details. "We're trying to push the field to have better experimental procedures, better evaluation methods," he says.
Henderson's experiment was conducted in a test bed for reinforcement learning algorithms called Gym, created by OpenAI, a nonprofit based in San Francisco, California. John Schulman, a computer scientist at OpenAI who helped create Gym, says that it helps standardize experiments. "Before Gym, a lot of people were working on reinforcement learning, but everyone kind of cooked up their own environments for their experiments, and that made it hard to compare results across papers," he says.
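To make the sensitivity to random seeds concrete, here is a minimal sketch (not from Henderson's study) using the classic OpenAI Gym API. It runs the same fixed random policy in CartPole-v1 under several seeds and reports the average return per seed; the spread across seeds is the kind of variation a single reported number can hide.

```python
# Minimal illustration of run-to-run variance in reinforcement learning experiments.
# Uses the classic OpenAI Gym API (gym.make / env.seed); CartPole-v1 is an assumption
# chosen because it needs no extra dependencies, unlike the MuJoCo half-cheetah.
import gym
import numpy as np

def episode_return(env):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())  # random policy
        total += reward
    return total

returns_by_seed = {}
for seed in [0, 1, 2, 3, 4]:
    env = gym.make("CartPole-v1")
    env.seed(seed)
    env.action_space.seed(seed)
    returns_by_seed[seed] = np.mean([episode_return(env) for _ in range(20)])

print(returns_by_seed)  # same code, different seeds, noticeably different numbers
```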
IBM Research presented another tool at the AAAI meeting to aid replication: a system for recreating unpublished source code automatically, saving researchers days or weeks of effort. It's a neural network—a machine learning algorithm made of layers of small computational units, analogous to neurons—that is designed to recreate other neural networks. It scans an AI research paper looking for a chart or diagram describing a neural net, parses those data into layers and connections, and generates the network in new code. The tool has now reproduced hundreds of published neural networks, and IBM is planning to make them available in an open, online repository.
Joaquin Vanschoren, a computer scientist at Eindhoven University of Technology in the Netherlands, has created another repository for would-be replicators: a website called OpenML. It hosts not only algorithms, but also data sets and more than 8 million experimental runs with all their attendant details. "The exact way that you run your experiments is full of undocumented assumptions and decisions," Vanschoren says. "A lot of this detail never makes it into papers."
Psychology has dealt with its reproducibility crisis in part by creating a culture that favors replication, and AI is starting to do the same. In 2015, Rougier helped start ReScience, a computer science journal dedicated to replications. The large Neural Information Processing Systems conference has started linking from its website to papers' source code when available. And Ke is helping organize a "reproducibility challenge," in which researchers are invited to try to replicate papers submitted for an upcoming conference. Ke says nearly 100 replications are in progress, mostly by students, who may receive academic credit for their efforts.
Yet AI researchers say the incentives are still not aligned with reproducibility. They don't have time to test algorithms under every condition, or the space in articles to document every hyperparameter they tried. They feel pressure to publish quickly, given that many papers are posted online to arXiv every day without peer review. And many are reluctant to report failed replications. At ReScience, for example, all the published replications have so far been positive. Rougier says he's been told of failed attempts, but young researchers often don't want to be seen as criticizing senior researchers. That's one reason why Ke declined to name the researcher behind the speech recognition algorithm she wanted to use as a benchmark.
Gundersen says the culture needs to change. "It's not about shaming," he says. "It's just about being honest."
http://www.pathsensitive.com/2018/01/the-benjamin-franklin-method-of-reading.html
Let’s face it, programming books suck. Those general books on distributed systems or data science or whatever can be tomes for a lifetime, but, with few exceptions, there’s something about the books on how to write code in a language/framework/database/cupcake-maker, the ones with the animal covers and the cutesy sample apps, they just tend to be so forgettable, so trite, so….uneducational.
I think I’ve figured out why I don’t like them, and it’s not just that they teach skills rapidly approaching expiration. It’s their pedagogical approach. The teaching algorithm seems to be: write these programs where we’ve told you everything to do, and you’ll come out knowing this language/framework/database/cupcake-maker. Central to these books are the long code listings for the reader to reproduce. Here’s an example, from one of the better books in this category:
class User < ApplicationRecord
  attr_accessor :remember_token
  before_save { self.email = email.downcase }
  validates :name, presence: true, length: { maximum: 50 }
  VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
  validates :email, presence: true, length: { maximum: 255 },
                    format: { with: VALID_EMAIL_REGEX },
                    uniqueness: { case_sensitive: false }
  has_secure_password
  validates :password, presence: true, length: { minimum: 6 }
  # …another 30 lines follows...
end

Traditionally, there are two ways to study a page like this:
1. Type out every line of code
2. Copy+paste the code from their website, maybe play around and make small changes

Approach #1 is a method that, like a lecture, causes the code to go from the author’s page to the reader’s screen without passing through the heads of either. The second is like trying to learn how to make a car by taking apart a seatbelt and stereo: you’re just toying with small pieces. Neither is a sound way to learn.
If you had an expert tutor, they wouldn’t teach you by handing you a page of code. Still, these books are what we have. How can we read them in a way that follows the principles of learning? Read on.
Mental Representations
According to K. Anders Ericsson in his book Peak, expertise is a process of building mental representations. We can see this because expert minds store knowledge in a compressed fashion. Musicians can memorize a page of music far faster than a page of random notes. Expert chess players told to memorize a board position will do much better than amateurs, but, when they make a mistake, they’ll misplace whole groups of pieces.
This is possible because music and chess positions have structure that makes them look very different from a page of random notes or a random permutation of pieces. Technically speaking, they have lower perplexity than random noise. So, even though there are 26 letters in the English alphabet, Claude Shannon showed that the information content of English is about 1 bit per letter: given a random prefix of a paragraph, people can guess the next letter about half the time.
This is why a programmer skilled in a technology can look at code using it and read through it like fiction, only pausing at the surprising bits, while the novice is laboring line-by-line. This is also why a smart code-completion tool can guess a long sequence of code from the first couple lines. With a better mental representation, understanding code is simply less work.
(How do these mental representations work? My officemate Zenna Tavares argues they are distribution-sensitive data structures.)
This is exactly what’s missing from the “just type out the code” approach: there’s nothing forcing your mind to represent the program as anything better than a sequence of characters. Yet being able to force your mind to do this would mean being able to learn concepts more rapidly. Here’s a 200 year-old idea for doing so.
The Benjamin Franklin Method
I don’t know what’s more impressive: that Benjamin Franklin became a luminary in everything from politics to physics, or that he did this without modern educational techniques such as schools, teachers, or StackOverflow. As part of this, he discovered a powerful method of self-study. I’ll let him speak for himself (or go read someone else’s summary).
About this time I met with an odd volume of the Spectator. It was the third. I had never before seen any of them. I bought it, read it over and over, and was much delighted with it. I thought the writing excellent, and wished, if possible, to imitate it. With this view I took some of the papers, and, making short hints of the sentiment in each sentence, laid them by a few days, and then, without looking at the book, try'd to compleat the papers again, by expressing each hinted sentiment at length, and as fully as it had been expressed before, in any suitable words that should come to hand. Then I compared my Spectator with the original, discovered some of my faults, and corrected them.
—Benjamin Franklin, Autobiography

This process is a little bit like being a human autoencoder. An autoencoder is a neural network that tries to produce output the same as its input, but passing through an intermediate layer which is too small to fully represent the data. In doing so, it’s forced to learn a more compact representation. Here, the neural net in question is that den of dendrons in your head.
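For readers who haven’t met autoencoders, here is a minimal sketch in PyTorch (my illustration, not part of the original argument): a 784-dimensional input is squeezed through a much smaller hidden layer, and the network is trained to reproduce its input from that bottleneck.

```python
# Minimal autoencoder sketch: reconstruct the input through a narrow bottleneck.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),   # encoder: compress 784 dims down to 32
    nn.Linear(32, 784),              # decoder: reconstruct the original 784 dims
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)              # placeholder batch (e.g. flattened images)
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)      # the target is the input itself
    loss.backward()
    optimizer.step()
```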
K. Anders Ericsson likens it to how artists practice by trying to imitate some famous work. Mathematicians are taught to attempt to prove most theorems themselves when reading a book or paper --- even if they can’t, they’ll have an easier time compressing the proof to its basic insight. I used this process to get a better eye for graphical design; it was like LASIK.
But the basic idea applied to programming books is particularly simple yet effective.
Here’s how it works:
Read your programming book as normal. When you get to a code sample, read it over
Then close the book.
Then try to type it up.
Simple, right? But try it and watch as you’re forced to learn some of the structure of the code.
It’s a lot like the way you may have already been doing it, just with more learning.
Acknowledgments
Thanks to Andrew Sheng and Billy Moses for comments on previous drafts of this post.
Building Your Own CDN for Fun and Profit
https://pasztor.at/blog/building-your-own-cdn
As you can (hopefully) see from this site, I like my pages fast. Very, very fast. Now, before we jump into this, let me be very clear about it: using a CDN will only get you so far. If your site is slow because of shoddy frontend work, a CDN isn’t going to help you much. You need to get your frontend work right first. However, once you’ve optimized everything you could, it’s time to look at content delivery.
My main problem was that even though you could get the initial website load down to a single HTTP request, with my server hosted in Frankfurt the folks from Australia still had to wait up to 2-3 seconds to get it. Round trip times of over 300 ms and a lot of providers in between made the page load feel just like any other Wordpress page.
So what can we do about it? One solution, of course, would be the use of a traditional CDN. However, most commercial CDNs pull the data from your server on request and then cache it for a while.
PlantUML SVG diagram
However, the initial page load is slower with a CDN than without it, since the CDN is a slight detour for the content. This is not a problem if you have a high traffic site since the content stays in the cache all the time. If, on the other hand, you are running a small blog like I do, the content drops out of the cache pretty much all the time. So, in effect, a traditional pull-CDN would make this site slower. I could, of course, use a push-CDN where I can upload the content directly, but those seem to be quite pricey in comparison to what I’m about to build.
How do CDNs work?
Our plan is clear: on our path to world domination we need to make our content available everywhere fast. That means our content needs to be close to the audience. Conveniently, there are a lot of cloud providers that offer cheap virtual servers in multiple regions. We can just put our content on, say, 6 servers and we’re good, right?
Well, not so fast. How is the user going to be routed to the right server? Let’s take a look at the process of actually getting a site. First, the user’s browser uses the Domain Name System (DNS) to look up the IP address of the website. Once it has the IP address, it can connect to the website and download the requested page.
PlantUML SVG diagram
If we think about it as simple as this, the solution is quite simple: we need a smart DNS server that does a GeoIP lookup on the requesting IP address and returns the IP address closest to it. And indeed, that’s (almost) how commercial CDNs do it. There is a bit more engineering involved, like measuring latencies, but this is basically how it’s done.
Making the DNS servers fast
Now the next question arises: how do we make the DNS server fast? Getting the website download to go to the closest node is only half the job, if the DNS lookup has to go all the way around the planet, that’s still a HUGE lag.
As it turns out, the infrastructure underpinning the internet is uniquely suitable to solve this problem. Network providers use the Border Gateway Protocol to tell each other which networks they can reach and how many hops away they are. The end user ISP then, in most cases, takes the shortest route to reach the destination.
If we now advertise the IP addresses in multiple locations, the DNS request will always be routed to the closest node. This is called BGP Anycast.
Why not use BGP Anycast for the website download?
Wait, hold on, if we can do this, why don’t we simply use BGP to route the web traffic? Well, there are three reasons.
First of all, doing BGP Anycast requires control over the network hardware and a pool of at least 256 IP addresses, which is way over our budget.
Second, BGP routes are not that stable. While DNS requests only require a single packet to be sent in both directions, HTTP (web) requests require establishing a connection to download the content. If the route changes, the HTTP connection is broken.
And finally, the lowest count of hops, which is the basis of BGP route calculations, does not guarantee the lowest round trip time. A hop across the ocean may be just one hop, but it’s a damn long one.
Further reading: Linkedin Engineering has a wonderful blog post about this topic.
Setting up DNS
Since we have established that we can’t run our own BGP Anycast, this means we can also not run our own DNS servers. So let’s go shopping! … OK, as it turns out, DNS providers that offer BGP Anycast servers and latency-based routing are a little hard to come by. During my search I found only two, the rather pricey Dyn and the dirt-cheap Amazon Route53.
Since we are cheap, Route53 it is. We add our domain and then start setting up the IPs for our machines. We need as many DNS records as we have servers around the globe (edge locations), and each record should look like this:
Each record is an A record with the IP of the edge location and the routing policy set to “latency”. The set ID should be something unique, and the region should be the one closest to that edge location. Tip: it is useful to set up a health check for each edge location, so it is taken out of rotation if it goes down.
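As a sketch of what creating such a record can look like programmatically, here is an example using the boto3 Route53 client; the hosted zone ID, domain, and IP address are placeholders.

```python
# Sketch: create one latency-routed A record per edge location via the Route53 API.
# The hosted zone ID, domain, and IP below are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "SetIdentifier": "frankfurt",     # must be unique per edge location
                "Region": "eu-central-1",         # AWS region closest to this node
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                # "HealthCheckId": "...",         # optional: drop the node if it dies
            },
        }]
    },
)
```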
Distributing content
The next issue we need to tackle is distributing content. Each of your edge nodes needs to have the same content. If you are using a static site generator like Jekyll, your task is easy: simply copy the generated HTML files on all servers. Something as simple as rsync might just do the trick.
If you want to use a content editing system like Wordpress, you have a significantly harder job since it is not built to run on a CDN. It can be done, but it’s not without its drawbacks, and the distribution of static content is still a problem. You may have to create a distributed object storage for that to fully work.
Using SSL/TLS certificates
The next pain point is using SSL/TLS certificates. Actually, let’s call them what they are: x509 certificates. Each of your edge locations needs to have a valid certificate for your domain. The simple solution, of course, is to use LetsEncrypt to generate a different certificate for each, but you have to be careful. LE has a rate limit, which I ran into on one of my edge nodes. In fact, I had to take the London node down for the time being until the weekly limit expires.
However, I am using Traefik as my proxy of choice, which supports using a distributed key-value store or even Apache Zookeeper as the backend for synchronization. While this requires a bit more engineering, it is probably a lot more stable in the long run.
The results
Time for the truth, how does my CDN perform? Using this tool, let’s see some global stats:
Oregon: 246 ms
California: 298 ms
Ohio: 227 ms
Virginia: 108 ms
Ireland: 217 ms
Frankfurt: 44 ms
London: 110 ms
Mumbai: 870 ms
Singapore: 517 ms
Seoul: 253 ms
Tokyo: 150 ms
Sydney: 358 ms
Sao Paulo: 911 ms

As you can see, the results are pretty decent. I might need two more nodes, one in Asia and one in South America, to get better load times there.
Frequently asked questions
When I do projects like this, people usually ask me: “Why do you do this? You must like pain.” Yes, to some extent I like doing things differently just for the sake of exploring new options and technologies, but building your own CDN can also make a lot of sense. Let’s address some of the questions about this setup.
Let’s be clear: if a commercial provider comes out with an affordable push CDN that allows me to do nice URLs, SSL and custom headers, I’ll absolutely throw money at them and stop running my own infrastructure. As fun as it was to build, I have enough servers to run without this.
Why don’t you just use CloudFlare?
CloudFlare is a wonderful tool for many, but as outlined above, CDNs drop unused content from their cache. On other sites that I’m managing I see a cache rate of about 75% with the correct setup. Having your own CDN means 100% of the content is always in cache, and there are no additional round trips to the origin server.
Why don’t you use S3 and CloudFront?
Amazon S3 has an option to host static websites, and it works in conjunction with CloudFront. However, it does not allow you to set custom headers for caching, nice URLs, etc. For that, you need Lambda@Edge, a tool that lets you run code on the CloudFront edge nodes. Lambda@Edge, however, has the same problem as CDNs: if it doesn’t receive requests for a certain time, the container running it is shut down and needs up to a second to boot up.
Why don’t you use Google AMP?
Google AMP only brings benefits when people visit your site from the Google search engine. Most of my traffic does not come from Google, so that won’t solve the problem. So it really only benefits Google, nobody else. Oh, and I’m perfectly capable of building a fast website without the dumbed-down HTML they offer.
Who cares? 3 seconds is a wonderful load time!
I’m a DevOps engineer who specializes in delivering content. If anyone, I should have a website that’s fast around the globe, no?
Oh, and I like to flip Google AMP off because it’s a terrible technology. Not that they’d care.
Build your own
Now it’s up to you: do you want to build your own CDN? The source code for mine is right there on my GitHub. Go nuts!
You don’t understand blockchain.
Well, maybe you do. But if you don’t see what all the fuss is about, wonder why anyone uses blockchain technology instead of Postgres, or think Tor figured out decentralization long before Bitcoin, then I have some news for you:
Every damn thing you know about the blockchain is wrong.
In this post, I am going to talk about what I see as the top 8 myths of blockchain. Be prepared for bold claims and new ways of looking at the space.
Myth 1: Blockchain Is Digital Currencies
Many early applications of blockchain technology have been directed at the creation of digital assets that can be used as currencies (more precisely: de-centralized, consensus-driven, append-only ledgers).
Blockchain technology itself, however, is neither about nor restricted to the creation of digital currencies.
In the most general possible sense, blockchain technology refers to a mathematical innovation that allows us to incentivize independent parties in an untrusted, purely consensual network to provide well-defined, agreed upon services.
“Blockchain technology refers to a mathematical innovation that allows us to incentivize independent parties in an untrusted, purely consensual network to provide well-defined, agreed upon services.”
It is hard to overstate just how important this innovation is. One of the oldest problems in the history of civilization is figuring out how to get different parties to work together.
Previously, legal markets have heavily relied on physical force to incentivize independent parties to provide (contractually) agreed upon services. Indeed, this is one of the important functions of governments.
Force works reasonably well for trustworthy parties in the same jurisdiction, but is costly and slow. Force doesn’t work in cases where the parties are untrusted, in cases where the parties span different jurisdictions, or in cases where speed or low costs are critical.
Blockchain technology provides a shocking answer to one of civilization’s oldest questions. Without force—indeed, with just math—we can engineer cooperation between different parties in a well-defined way.
Classifying the set of viable blockchain solutions is not trivial, but it’s clear based on this definition that any problem uniquely solved by blockchain technology will involve some combination of untrusted parties, multiple jurisdictions, high speed, and low costs.
It should also be clear that some solutions currently marketed as blockchain solutions are not actually blockchain solutions, per se, even though they share some of the underlying math (zero-knowledge proofs, homomorphic computing, lattice cryptography, etc). More precisely, all blockchain tech involves cryptographic tech, but not all cryptographic tech involves blockchain tech.
Myth 2: Tokens Are Currencies
Tokens, such as Bitcoins, can be used as currencies, but fundamentally, tokens are not currencies, but capabilities.
Possession of a token gives you the capability to avail yourself of the well-defined, agreed upon services provided by a blockchain technology. Incidentally, people may be willing to give you some form of currency in order to acquire a token, if they want those services, or if they are speculating on the value of the tokens.
For pure cryptocurrencies such as Bitcoin, the services revolve around providing a global, distributed ledger, which blurs the line between capability and currency. But the world doesn’t need more than a few pure cryptocurrencies, so the majority of successful blockchain applications will not be pure cryptocurrencies.
A better analogy for tokens is corporate stock: just like stock gives you capabilities to avail yourself of the services provided by the corporation to stockholders (such as the right to vote, right to dividends, etc.), tokens let you avail yourself of the services provided by the blockchain platform.
Stock usually has real value, because you can often exchange it for money, but it’s less a currency than it is a set of rights to do something.
Myth 3: Blockchain Isn’t Scalable
Today’s blockchains are generally not scalable. Bitcoin can handle a few transactions per second. Ethereum can handle about five times that amount. To give you a sense for how terrible this performance is, Visa handles 65,000 transactions per second!
Now, since tokens are not currency, blockchain platforms don’t necessarily need to scale to the level of payment systems. That said, it’s clear numerous applications require performance orders of magnitude greater than currently offered, as well as far cheaper storage and compute than currently possible on any existing blockchain.
Nonetheless, there are no theoretical reasons why even public blockchain technology cannot scale. Ongoing work in proof-of-stake consensus algorithms, sharding, efficient routing, reputation networks and heterogeneous (specialized) networks, all provide a clear (if difficult) path toward building highly scalable blockchain platforms.
Blockchain platforms can be thought of as highly-constrained, specialized databases. First generation systems are architecturally similar to flat file managers (how data was stored prior to databases). Later systems will be more architecturally similar to highly-scalable databases like Google Spanner.
Myth 4: Blockchain Platforms Will Replicate Tech Stacks
Many people who are sold on the promise of blockchain technology imagine that the distributed applications of tomorrow will be built on blockchain platforms that replicate the form and function of today’s tech stacks.
Just like today’s applications are built on file systems, databases, message queues, compute nodes, and so forth, many imagine that distributed applications will be built on combinations of services like Filecoin (for storage), IPDB (for database), Ethereum (for logic), and so forth.
Billions have been invested in this vision of the future, which is so utterly and catastrophically wrong that entire blockchain platforms will collapse in a heap of rubble. Or at least be reduced to ghosts of their former selves.
The reason is simple: any distributed application that has significant market value can easily attract enough participation to provide all of the services it needs for itself, on its own blockchain, without having to pass along numerous third-party costs to end-users.
The stack for new distributed applications is not going to be a bunch of blockchain platforms, but rather open source software. Developers will assemble their applications from open source components that provide different distributed services, such as queuing and storage.
The only distributed applications that will be built in Frankenstein fashion are those that cannot easily attract enough outside compute and storage—i.e. the set of unsuccessful distributed applications.
Myth 5: Blockchains Are Anti-Government
Lots of early vocal blockchain proponents were outspoken libertarians, or even anarchists, which led many to believe that blockchains are inherently anti-government.
Nothing could be further from the truth.
As tokens are not currency, but assets, proceeds from the sale of such assets can be taxed in the same manner as other assets. In addition, if blockchain technology is eventually regulated differently, the capability to provide verifiable audits on blockchain transactions could ensure perfect compliance with regulation, which is a standard no other industry can match.
Blockchain also provides the technology necessary to power many functions of government in a way that can be audited, and yet preserves a tunable degree of confidentiality in transactions. For example, a government powered by blockchain technology could issue currency and tax participants in an extraordinarily efficient, software-defined way, which leaves no room for tax evasion, has none of the overhead of the existing system, and gives citizens trust through tunable transparency.
Blockchains are not anti-government, they are just a new technological tool, one that actually has the potential to radically improve the efficiency and transparency of government, and usher in an era of software-defined (and therefore, software-enforced) regulation.
Myth 6: Blockchain Is An Append-Only Chain of Blocks
Technically speaking, the word blockchain stems from Bitcoin’s append-only chain of blocks, a data structure produced as part of the “mining” process that confirms transactions.
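As a reference point, here is a minimal sketch (mine, not any particular implementation) of that hash-linked, append-only structure: each block commits to the hash of its predecessor, so altering an earlier block invalidates everything after it. Real systems add consensus machinery such as proof of work on top of this linkage.

```python
# Minimal hash-linked chain: each block commits to the previous block's hash.
import hashlib
import json

def make_block(prev_hash, transactions):
    body = {"prev_hash": prev_hash, "transactions": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("0" * 64, ["coinbase -> alice"])
block1 = make_block(genesis["hash"], ["alice -> bob: 5"])
block2 = make_block(block1["hash"], ["bob -> carol: 2"])

# Tampering with an earlier block breaks the link to everything after it.
assert block2["prev_hash"] == block1["hash"]
```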
These days, blockchain technology has evolved past linear append-only chains. Sharded systems utilize Directed Acyclic Graphs (DAGs) of blocks, not linear chains, and there is no specific requirement for ever-growing, append-only chains (we’ll see other types of chains in the future that preserve the ability to audit but discard some historical information).
While some developers may not like the imprecision or terminology drift, blockchain now refers generically to a space of solutions, not any specific implementation techniques or data structures.
Myth 7: Blockchains Should Be Implemented in Go or C/C++
Given the high stakes involved, blockchain platforms should be written in languages that permit as much static verification as possible, and which allow straightforward and easily verifiable translation of math and formal protocol semantics into executable code.
Languages with strong, static type systems, which possess semantics amenable to formal verification, and which support functional programming (a mathematical style of developing software), are an excellent fit for implementation of blockchain technology.
Haskell in particular is proving to be a robust choice for implementing new blockchain technology, and other functional programming languages show promise as well (OCaml, Scala).
(Sorry, I couldn’t resist!)
Myth 8: Blockchain Is a Bubble
No one can perfectly predict the future of Bitcoin, Ethereum, and other major players in the blockchain space—precisely because the success of those platforms depends on people. Developers have to write code, pull requests must be accepted or rejected, and miners must adopt (or not adopt) upgrades.
While it’s impossible to perfectly predict what will become of the first-movers in the blockchain space, I feel confident in saying that the market cap of all blockchain technologies today is completely insignificant compared to what it will ultimately become.
In other words, while individual cryptocurrencies may (or may not) be in a bubble, the overall blockchain market is in its early days. All the real growth lies in the future, not the past.
Blockchain is a technology that will change the world forever.
The Future
In this post, I’ve outlined what I see as the major myths around blockchain technology that impede communication and funnel investment toward dead-end solutions.
What I haven’t done is talk about what blockchain technology is good for.
What are the killer applications for this type of technology, how will they emerge, and how will they be structured? What are the innovations and players we should be paying attention to, and the ones that are safe to ignore? How can blockchain technology provide a radically new way of monetizing some types of open source projects?
All these are excellent topics…for different posts. Let me know what you’re interested in hearing about in the comments below!
“Everything You Know About the Blockchain Is Wrong” was published on February 13, 2018 by John A De Goes.
interviewing.io is a platform where engineers practice technical interviewing anonymously. If things go well, they can unlock the ability to participate in real, still anonymous, interviews with top companies like Twitch, Lyft and more. Earlier this year, we launched an offering specifically for university students, with the intent of helping level the playing field right at the start of people’s careers. The sad truth is that with the state of college recruiting today, if you don’t attend one of very few top schools, your chances of interacting with companies on campus are slim. It’s not fair, and it sucks, but university recruiting is still dominated by career fairs. Companies pragmatically choose to visit the same few schools every year, and despite the career fair being one of the most antiquated, biased forms of recruiting that there is, the format persists, likely due to the fact that there doesn’t seem to be a better way to quickly connect with students at scale. So, despite the increasingly loud conversation about diversity, campus recruiting marches on, and companies keep doing the same thing expecting different results.
In a previous blog post, we explained why companies should stop courting students from the same five schools. Regardless of how important you find that idea (for altruistic reasons, perhaps), you may have been left skeptical about the value and practicality of broadening the college recruiting effort, and you probably concede that it’s rational to visit top schools, given limited resources: while people are often willing to agree that there are perfectly qualified students coming out of non-top colleges, they maintain that such students are relatively rare. We’re here to show you, with some nifty data from our university platform, that this is not true.
To be fair, this isn’t the first time we’ve looked at whether where you went to school matters. In a previous post, we found that taking Udacity and Coursera programming classes mattered way more than where you went to school. And way back when, one of our founders figured out that where you went to school didn’t matter at all but that the number of typos and grammatical errors on your resume did. So, what’s different this time? The big, exciting thing is that these prior analyses were focused mostly on engineers who had been working for at least a few years already, making it possible to argue that a few years of work experience smooths out any performance disparity that comes from having attended (or not attended) a top school. In fact, the good people at Google found that while GPA didn’t really matter after a few years of work, it did matter for college students. So, we wanted to face this question head-on and look specifically at college juniors and seniors while they’re still in school. Even more pragmatically, we wanted to see whether companies that limit their hiring efforts to just top schools are actually getting a higher caliber of candidate.
Before delving into the numbers, here’s a quick rundown of how our university platform works and the data we collect.
The setup
For students who want to practice on interviewing.io, the first step is a brief (~15-minute) coding assessment on Qualified to test basic programming competency. Students who pass this assessment, i.e. those who are ready to code while another human being breathes down their neck, get to start booking practice interviews.
When an interviewer and an interviewee match on our platform, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question. Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role, and interviewers typically come from top companies like Google, Facebook, Dropbox, Airbnb, and more.
After every interview, interviewers rate interviewees on a few different dimensions, including technical ability. Technical ability gets rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. On our platform, a score of 3 or above has generally meant that the person was good enough to move forward. You can see what our feedback form looks like below:
[Image: interviewer feedback form]
On our platform, we’re fortunate to have thousands of students from all over the U.S., spanning over 200 universities. We thought this presented a unique opportunity to look at the relationship between school tier and interview performance for both juniors (interns) and seniors (new grads). To study this relationship, we first split schools into the following four tiers, based on rankings from U.S. News & World Report:
- “Elite” schools (e.g. MIT, Stanford, Carnegie Mellon, UC-Berkeley)
- Top 15 schools (not including the top tier, e.g. University of Wisconsin, Cornell, Columbia)
- Top 50 schools (not including the top 15, e.g. Ohio State University, NYU, Arizona State University)
- The rest (e.g. Michigan State, Vanderbilt University, Northeastern University, UC-Santa Barbara)

Then, we ran some statistical significance testing on interview scores vs. school tier to see if school tier mattered, for both interns (college juniors) and new grads (college seniors), comprising a set of roughly 1000 students.
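As a rough illustration of what such a test looks like (this is a sketch with made-up counts, not our actual data or code), here is a chi-square test of independence between school tier and technical score in Scala:

```scala
// A rough sketch with made-up counts: a chi-square test of independence
// between school tier (rows) and technical interview score 1..4 (columns).
object SchoolTierTest extends App {
  val observed: Array[Array[Double]] = Array(
    Array(12.0, 30.0, 41.0, 17.0), // "Elite" schools
    Array(14.0, 33.0, 45.0, 18.0), // Top 15
    Array(13.0, 31.0, 44.0, 16.0), // Top 50
    Array(15.0, 34.0, 46.0, 19.0)  // The rest
  )

  val rowTotals  = observed.map(_.sum)
  val colTotals  = observed.transpose.map(_.sum)
  val grandTotal = rowTotals.sum

  // Chi-square statistic: sum over all cells of (observed - expected)^2 / expected,
  // where expected = rowTotal * colTotal / grandTotal.
  val chiSquare = (for {
    i <- observed.indices
    j <- observed(i).indices
    expected = rowTotals(i) * colTotals(j) / grandTotal
  } yield math.pow(observed(i)(j) - expected, 2) / expected).sum

  val df = (observed.length - 1) * (observed.head.length - 1) // = 9
  val criticalAt05 = 16.92 // chi-square critical value for df = 9, alpha = 0.05

  println(f"chi-square = $chiSquare%.2f with $df degrees of freedom")
  println(
    if (chiSquare < criticalAt05)
      "No evidence that score depends on school tier"
    else
      "Score appears to depend on school tier"
  )
}
```

With near-identical score proportions in every row, the statistic lands far below the critical value, which is the shape of result described below: no detectable dependence of score on school tier.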
Does school have anything to do with interview performance?
In the graphs below, you can see technical score distributions for interviews with students in each of the four school tiers (see legend). As you recall from above, each interview is scored on a scale of 1 to 4, where 1 is the worst and 4 is the best.
First, the college juniors…
And then, the seniors…
What’s pretty startling is that the shape of these distributions, for both juniors and seniors, is remarkably similar. Indeed, statistical significance testing revealed no difference between students of any tier when it came to interview performance.1 What this means is that top-tier students are achieving the same results as those in no-name schools. So the question becomes: if the students are comparable in skill, why are companies spending egregious amounts of money attracting only a subset of them?
Okay, so what are companies missing?
Besides missing out on great, cheaper-to-acquire future employees, companies are missing out on an opportunity to save time and money. Right now a ridiculous amount of money is being spent on university recruiting. We’ve previously cited the $18k price tag just for entry to the MIT career fair. In a study by Lauren Rivera covered in the Harvard Business Review, she reveals that one firm budgeted nearly $1m just for social recruiting events on a single campus.
The high price tag of these events also means it makes even less sense for smaller companies or startups to try to compete with high-profile, high-profit tech giants. Most of the top schools that are being heavily pursued already have enough recruiters vying for their students. Unwittingly, this pursuit seems to run contrary to most companies’ desire for greater diversity and long-term sustainable growth.
Even when companies do believe talent is evenly distributed across school tiers, there are still reasons why they might recruit at top schools. Other factors help elevate certain schools in a recruiter’s mind. There are long-standing company-school relationships (for example, the number of alumni who currently work at the company). There are signaling effects too: companies get Silicon Valley bonus points by saying their eng team is made up of a bunch of ex-Stanford, ex-MIT, ex- etc. etc. students.
So what can companies do?
As such, companies may never stop recruiting at top-tier schools entirely, but they ought to at least include schools outside of that very small circle in the search for future employees. The end result of the data is the same: for good engineers, school means a lot less than we think. The time and money that companies put in to compete for candidates within the same select few schools would be better spent creating opportunities that include everyone, as well as developing tools to vet students more fairly and efficiently.
As you saw above, we used a 15-minute coding assessment to cull our inbound student flow, and just a short challenge leveled the playing field between students from all walks of life. At the very least, we’d recommend employers do the same thing in their process. But, of course, we’d be remiss if we didn’t suggest one other thing.
At interviewing.io, we’ve proudly built a platform that grants the best-performing students access to top employers, no matter where they went to school or where they come from. Our university program, in particular, lets companies reach a far larger pool of students for the cost of attending one or two career fairs at top target schools. Want diverse, top talent without the chase? Sign up to be an employer on our university platform!
9 VovaKurbatov 4 hrs 1
news.ycombinator.com/item?id=16356659 When Apple released ARKit, designing for augmented reality wasn’t yet a defined field. But during the autumn of 2017 a lot of AR apps appeared in the App Store, and almost all of them used ARKit.
In the beginning, the user flow of augmented reality applications wasn’t clear. I wanted to find out which interaction patterns are the most popular in ARKit apps, so I made an extensive analysis of the onboarding process for almost every AR app available in the App Store. I also did free UX reviews just to gain experience and more insights.
I kept noticing that first-time users found it tough to understand how AR works. The UI wasn’t clear even for users experienced with AR, so the learning curve for newcomers became even longer.
It looks like the first demos were built mostly by Unity and iOS developers. Understandably, they didn’t have the time or other resources to hire an experienced UX designer, and at that point there were no settled interaction patterns to follow.
So in my article “Onboarding in augmented reality mobile application” I tried to cover the essential steps and clarify what to focus on and what to avoid. I hope it made at least one app more user-friendly.
After some time, Apple and Google released their own guidelines for AR apps. Although they are helpful, they still leave blind spots. I recommend having a look if you haven’t had a chance to study them yet.
Over time I collected a lot of screenshots and started noticing patterns in how AR apps build user flows and use UI components. So I decided to create a UI kit sophisticated enough for designers, but also useful for developers who want ready-made UI solutions to copy.
For easy searching, I sorted all the screens into the following categories:
This is the first and most confusing moment for first-time users. An app should gently (or not so gently) let the user know that it needs some information about the environment.
Once the application has scanned the area and detected planes, the user can start placing 3D content. There are different interaction patterns that suit different situations.
This step is about playing with the augmented objects themselves. In flat UI there are plenty of stable, well-known patterns, such as swiping from the top to refresh a page. In AR it’s like the Wild West. Apple and Google have tried to define some situations, but some cases require unique interactions, and it’s critical to introduce them gently: depending on their importance, that can be a small tip next to the button or an almost full-page popup.
Probably the best-known AR app is Pokémon GO, which is location-based. I hope that in the future we will see many more interesting uses of location-based augmented reality.
For some applications, AR isn’t the main feature; showing an object in AR is just one option. Apple suggested an icon for consistency, but I haven’t noticed it in any application I tested. Lists and galleries remain the most obvious ways to find a model.
The user spends all this time in the camera view, where various UI components can help them interact with the AR world.
To quickly explain some gestures and concepts to the user, I created a collection of icons:
I hope this collection will help designers create more user-friendly augmented reality mobile applications.
Check the full preview on Behance.