Friday Hacks #257, April 12: On the New XOR Problem and Becoming Data Driven

Posted by Wong Kok Rui

Date/Time: Friday, April 12 at 7:00pm SGT
Venue: COM3-01-20 Seminar Room 11
Sign-up Link: Sign-up here

Food 🍕 and Drinks 🧋 will be served!

Friday Hacks #257 Poster 1
Friday Hacks #257 Poster 2

1) How to *Actually* Become Data Driven

People often say things like “become data driven” without explaining what that means or how to do it. On the other hand, well-meaning commentators will sometimes say things like “oh, being data-driven is bad, it’s better to be data-informed instead”. In 2022, I spent six months working with early Amazon exec Colin Bryar to explicate the principles behind the Amazon Weekly Business Review (and mostly to learn how Amazon uses data to run its business). I then spent nine months attempting to put these ideas into practice. This talk will cover everything you need to know about ACTUALLY becoming data driven, from first principles. No bullshit, no hand-wavy sketches, no fancy technology, no buzzwords. Just a small set of effective ideas that, for what it’s worth, turn out to be the same ideas Amazon, Koch Industries, and Toyota used to become data driven, back in their day.

Speaker Profile

Cedric Chin helped create the NUS Hackers, and, upon graduation, vanished into Vietnam to run the software engineering office for a Singaporean company (and also to complete his tuition grant bond). He helped that company pivot from outsourcing to product and bootstrapped it to single-digit million dollars in annual revenue over the course of three years. He then left, started his own company, helped another company double their annual recurring revenue, and then spent four months training Judo full time just for the heck of it. He now runs his own thing and writes at Commoncog.

2) The New XOR Problem

Minsky and Papert’s 1969 book Perceptrons describes the limitations of the perceptron, most famously the XOR problem, and this has often been cited as a contributing factor to the first AI winter. These problems have largely been overcome with MLPs and deeper networks, but the recent focus on the Transformer architecture, while successful in many ways, introduces new limitations in what that architecture can express. These limitations echo the XOR problem in some interesting ways. I will briefly cover some of the recent media coverage and the history of AI research, then motivate why these limitations are important, and give a quick overview of what they entail, with a focus on an XOR-like problem. Finally, I will give some examples of pitfalls to avoid if you’re trying to use these models in your project.
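
For readers unfamiliar with the original XOR problem: no single linear threshold unit (a perceptron) can separate the four XOR points, because the positive and negative examples are not linearly separable, yet adding just one hidden layer solves it. Here is a minimal NumPy sketch (not from the talk) with hand-chosen weights, where the hidden units compute OR and AND and the output combines them:

```python
import numpy as np

# The four XOR inputs and their labels. No single hyperplane can
# separate the 1s from the 0s, which is the classic perceptron failure.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR labels

def step(z):
    """Hard threshold activation, as in the original perceptron."""
    return (z > 0).astype(int)

# One hidden layer with hand-chosen weights:
#   hidden unit 0 fires on OR, hidden unit 1 fires on AND,
#   and the output computes OR AND NOT(AND), i.e. XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # thresholds for OR and AND respectively
W2 = np.array([1.0, -2.0])    # output: OR minus twice AND
b2 = -0.5

h = step(X @ W1 + b1)         # hidden activations: [OR, AND] per input
out = step(h @ W2 + b2)       # network output

print(out)  # [0 1 1 0], matching XOR
```

The point of the construction is that the hidden layer re-embeds the inputs so the output unit's single hyperplane suffices, which is exactly the kind of expressivity gap the talk revisits for Transformers.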

Speaker Profile

Shawn Tan is a graduating PhD student at the Montreal Institute for Learning Algorithms (Mila). He’s back in Singapore for a while and will be joining the MIT-IBM Watson AI Lab as a Research Scientist.

He’s been staring at neural networks for the past 10 years.

👋 See you there!
