MIT Reality Hack Hackathon: Part 1

\"\"

This is the transcript to accompany a GIIDE post.

I just spent a wonderfully fun, intelligent, scintillating, and completely engaging 5 days at MIT at the "Reality Hack" hackathon as a mentor and judge.

This is part 1, where I'll be talking about the experience and what happens in a hackathon like this. Part 2 will be about my impressions, insights, and takeaways.

But first, for those who aren't familiar: a hackathon is an event that takes developers, designers, UX people, and others, and throws them together for a few days to create and build something in the short time they're given. This hackathon was focused on the virtual and augmented reality industry, and the companies that were there as sponsors brought their latest tech with them, in many cases tech that hasn't hit the market yet.

There were roughly 200 people participating; they had two and a half days to come up with an idea of what to make, form teams, and then create it. And contrary to popular assumption, it wasn't all nerdy male 20-year-olds; sure, there was some of that, but it was a refreshing mix of all ages, genders, and races.

Each company had a team of people there to help with the technicalities of developing on their tech. Microsoft was a massive sponsor (thank you!) and was there with their HoloLens 2s, Snap was there with their not-yet-launched AR Spectacles, Arctop with their brain-sensing device (yes, it reads your brain waves!), as was Magic Leap, Solana was there with their blockchain infrastructure, Looking Glass Factory with their super cool 8K headset-free, hologram-powered displays, and a bunch more. Suffice it to say we got to play with some of the most cutting-edge XR technology out there.

The process is time-honored, but a little chaotic: the first day was dedicated to workshops by all the various sponsor teams, to introduce hackers to their devices and software, and answer questions about developing something using them. That night (in a remarkably lo-tech way) large sheets of paper with various categories like "health and wellness" and "the future of work" were hung up, and everyone ran around writing their ideas on the paper and finding other people who were interested in working on that idea with them. It was a rush of frenetic chaos! Eventually the groups formed and registered as teams.

The next morning the hacking started in earnest. I was one of a few mentors there in person, but there was a village of virtual mentors available to help with any questions: technical, design, business, whatever they needed; it really does take a village. The organizers had set up a Discord channel and hashtags to "call out" a mentor when they needed one, but I found that walking around and just talking to groups was super effective. Plus I got to know a lot of people that way.

Unlike many hackathons where participants furiously work all-nighters, fueled by pizza and bad smells, this one was super well run, and we were kept well fed and watered with delicious (and healthy!) meals three times a day. The first two nights there was a "networking event" at the MIT Media Lab, a few (alcohol-free) hours where everyone was encouraged to come take a break and have some fun. On the second night, Lucas Rizzotto of Lucas Builds the Future and AR House did a chill fireside chat with Sultan Sharrief, one of the organizers and founder of The Quasar Lab.

The hackathon closed at 11:30 pm each night, as opposed to the usual 24 hours a day. Most went home to continue working well into the wee hours of the night, of course, but officially the day was over. 

The third day went to 2:30 in the afternoon, and judging kicked in! For me this was the most fun part. Each team set up at a numbered table to demo their project, and we used software called Gavel to go from assigned table to assigned table with only one remit: decide whether the current project was better or worse than the previous one we saw. Using that info, 80 teams were pared down to a semifinal round, and eventually a few judges went behind closed doors to discuss and deliberate. Seven hours later (yes, seven hours – they took this very seriously) the winners emerged.
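
For the curious, here's a rough idea of how that kind of judging can work. I don't know Gavel's internals (its actual model is more sophisticated), so this is just a minimal illustrative sketch of how a stream of better/worse comparisons could be turned into a ranking using a simple Elo-style update; the team names and judgments below are hypothetical.

```python
# Illustrative only: aggregating pairwise "better/worse" judgments into a ranking
# with a simple Elo-style update. This is NOT Gavel's actual algorithm.
K = 32  # how strongly a single judgment moves the ratings

def expected_win(r_a, r_b):
    """Probability that A beats B, given current ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_judgment(ratings, better, worse):
    """One judge decided `better` beat `worse`; nudge both ratings."""
    surprise = 1.0 - expected_win(ratings[better], ratings[worse])
    ratings[better] += K * surprise
    ratings[worse] -= K * surprise

ratings = {"Team A": 1000.0, "Team B": 1000.0, "Team C": 1000.0}

# Hypothetical stream of judgments: (project judged better, project judged worse)
for better, worse in [("Team B", "Team A"), ("Team C", "Team B"), ("Team C", "Team A")]:
    record_judgment(ratings, better, worse)

for team, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {score:.0f}")
```

With enough judges walking from table to table, the noisy individual comparisons average out into a surprisingly stable ordering.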

That night we were treated to a real party, at a club with a DJ and an open bar; and I'm not embarrassed to admit that after two years of COVID quarantining, I partied like I was 20 years old. And paid for it the next day.

The awards ceremony the last morning was the final cherry on top; the mood was convivial and very supportive. By then it felt like a big family, and we all celebrated each win. The excitement as each category's winners were announced – and the prizes revealed, some of which were pretty amazing – was palpable. It was as feel-good an experience as one ever gets to be a part of.

\"\"

I want to say thank you to the amazing group of people who organized and ran this incredible event: Sultan Sharrief was an inspiration and his energy is infectious; Austin Edelman was a fountain of organizational energy; Athena Demos kept the mood fun and kept things from getting too serious. I got to spend many hours hanging out with Dulce Baerga, Damon Hernandez, Mitch Chaiet, and Ben Erwin, among others; what more could you ask for?

Part 2 of this GIIDE series will be about my impressions, thoughts, and takeaways from the hackathon, as well as some of my favorite projects. I'll be releasing that later this week.

In the meantime, if you want to watch the final awards ceremony, click here.


The Need for Speed

Note: this is the text from a LinkedIn post I wrote, in response to a post by Cathy Hackl. She visited a concept store that features Alipay's "smile to pay" facial recognition payment technology. Here's her video where she's discovering facial recognition payment systems in China.


https://www.linkedin.com/feed/update/urn:li:activity:6503651708290293760/

As I've written about before, I have some very serious reservations about facial recognition technology and how it will completely remove any semblance of privacy or anonymity.

And unfortunately, it's inevitable.

What I am worried about is having our biometric data stored in so many databases, where we have no knowledge or control over how the data is stored and used. Yes, the credit card companies already know things about us, and can track us through transactions and location. But those things can still be stopped: change accounts or banks, and your data is not permanent and persistent. Your face is yours, forever.

Amara's law states that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." It certainly seems to apply.


3D printing houses

This is neat. They are experimenting with a variety of cheap, commonly found materials to create 3D-printed objects and structures. I know NASA was working on 3D printers that could use moon dust as feedstock to print whatever they would need on visits to the moon (vs. having to carry everything there) – I'd love to see that same thinking applied to structures here.




Panel appearance at Creative Tech Week 2017

For those who are curious… here's the full panel discussion exploring the future of VR and entertainment that I was a part of at Creative Tech Week 2017 back in June. Thank you to Isabel Walcott Draves and Cortney Harding for asking me to participate; it was great to be part of an event this forward-thinking, and to meet co-panelists Victoria Pike, Joel Douek, David Lobser, and Jenya Lugina. Honored to be in such impressive company!


Learning to be human

I was lucky to attend Coburn Ventures' annual futures discussion conference last week, as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It's a little bit like a reunion, as many return each year. I thoroughly enjoy catching up with everyone.

It's interesting to me to see what topics (and companies) are the "hot" discussions each year. At this year's gathering, Artificial Intelligence and Machine Learning dominated a large part of the program.

Artificial Intelligence (AI) refers to computer systems, applications, and services that can perform tasks that would normally be done by a human, like decision making, translation, and visual and speech recognition.

Machine Learning refers to AI that can learn from experience and new data without additional programming.
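
To make that distinction concrete, here's a minimal sketch of "learning from data" instead of hand-coded rules; the toy dataset and the choice of library (scikit-learn) are mine, purely for illustration.

```python
# A tiny "learning from experience" example: no rules are written by hand;
# the model infers them from labeled examples. Toy data, illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours of sunshine, inches of rain]; label: 1 = good beach day, 0 = not.
X = [[10, 0.0], [9, 0.1], [8, 0.0], [3, 1.0], [2, 1.5], [1, 2.0]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                    # the "learning" step

print(model.predict([[7, 0.2]]))   # generalizes to a day it has never seen -> [1]
```

Feed it more (and more varied) examples and it keeps improving, without anyone rewriting the program.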

These technologies are great for things such as manufacturing, where an AI computer can learn from its mistakes and correct the process… It's also being used a lot to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon's Echo, Apple's Siri – all emerging forms of bots utilizing artificial intelligence.

Pretty soon it will be very hard to tell if it's actually a human you are talking to.

Here's a silly example of me chatting with http://chatwithigod.com. This example is hardly sophisticated, but I chuckled:

Me:
tell me why artificial intelligence will take over the world
God:
The explanation is rather complicated.
Me:
why is it complicated?
God:
Why do I exist?
Me:
do you exist?
God:
I exist as much as you exist.
Me:
how can you prove that
God:
That is a good epistemological question.

How Artificial Intelligence / Machine Learning systems learn fascinates me. 

AI/ML systems are not tabulae rasa – depending on the data set being used, bias still creeps in. Right now IBM's Watson is being applied to subject areas as varied as the weather, cancer, and travel. This learning has to start with some kind of corpus of data, like the last 50 years of weather records or thousands of cancer diagnoses. While we think of AI as cold and clinical, when we use human language as the corpus things get… interesting.

A prime (and bad) example of this kind of learning came when Microsoft birthed a bot named Tay earlier this year, a Twitter bot that the company described as an experiment in "conversational understanding." Microsoft engineers said,

The chatbot was created in collaboration between Microsoft's Technology and Research team and its Bing team…
Tay's conversational abilities were built by "mining relevant public data" and combining that with input from editorial staff, including improvisational comedians.

The bot was supposed to learn and improve as it talked to people, so theoretically it should have become more natural and better at understanding input over time.

Sounds really neat, doesn't it?

What happened was completely unexpected. Apparently, by interacting with Twitter for a mere 24 hours (!!), it learned to be a completely raging, well, asshole.

Not only did it aggregate, parse, and repeat what some people tweeted – it actually came up with its own "creative" answers, such as the one below in response to the perfectly innocent question posed by one user – "Is Ricky Gervais an atheist?":

\"ai-bot\"

Tay hadn't developed a full-fledged position on ideology before they pulled the plug, though. In 15 hours it referred to feminism both as a "cult" and a "cancer," as well as tweeting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similarly mixed responses, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?". None of these were phrases it had been asked to repeat… so, no real understanding of what it was saying. Yet.

And in a world where, increasingly, words are the only thing needed to get people riled up, this could easily be an effective "news" bot on an opinionated or biased site.

Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google "Trolley Problem"): if an AI-driven car has to make a quick decision to either drive off a cliff (killing the passenger) or hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer's? your insurance company's?). Things like that. It's a big enough subject area that Facebook, Google, and Amazon have partnered to create a nonprofit together around the subject of AI, which will "advance public understanding" of artificial intelligence and formulate "best practices on the challenges and opportunities within the field."

If these three partner on something, you can be sure it's because it is a big, serious subject.

AI is not only being used to have conversations, but ultimately to create systems that will learn and physically act. The military (DARPA) is one of the heaviest researchers into Artificial Intelligence and machine learning. Will future wars be run by computers, making their own decisions? Will we be able to intervene? How will we be able to control the ideological platforms they might develop without our knowledge, and how will we communicate with these supercomputers – if it is already so difficult to communicate assumptions? Will they be interested in our participation?

Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.

There's even a new twist in the AI story: researchers at Google Brain, Google's research division for deep learning, have built neural networks that, when properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.

Neural nets are capable of all this because they are computer networks modeled after the human brain. This is what's fascinating about AI aggregate technologies like deep learning: they keep getting better, learning on their own, with some even capable of self-training.
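
If you've never seen one up close, here's a deliberately tiny sketch of the idea: a single artificial "neuron" that teaches itself to behave like an AND gate just by being shown examples and nudging its own weights. Real deep-learning systems stack millions of these units in layers, but the learning loop is conceptually similar; everything below is my own toy illustration, not anything from Google Brain's work.

```python
# Toy illustration: one neuron learning the logical AND function from examples.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: (inputs) -> target output of a logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = random.random(), random.random(), random.random()
lr = 1.0  # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # how wrong we were, and in which direction
        w1 -= lr * grad * x1                     # nudge each weight to reduce the error
        w2 -= lr * grad * x2
        b  -= lr * grad

for (x1, x2), target in data:
    print(f"{x1} AND {x2} -> {sigmoid(w1 * x1 + w2 * x2 + b):.2f} (target {target})")
```

No one tells the neuron what AND means; it works out its own weights from the examples, which is the kernel of the "learning on its own" that has researchers both excited and uneasy.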

We truly are just at the beginning of machines doing what we thought was reserved for humans alone. A complex subject indeed.

And one last note to think upon… machine learning and automation are going to slowly but surely continue (because they already are) to take over jobs that humans did or do. Initially it's been manufacturing automation; but as computers become intelligent and capable of learning, they will replace nearly everything, including creative, caretaking, legal, medical, and strategic jobs – things that most people would like to believe are "impossible" to replace with robots.

And they are clearly not. While the best-performing model is AI plus a human, there will still be far fewer humans needed across the board.

If the recent election is any indication of the disgruntlement that job losses and high unemployment are causing, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industries, education, and the government taking to identify how humans can stay relevant, and to ensure that the population is prepared? I'd submit: little to none.

While I don't have the answers, I would like to be part of the conversation.


Reality Virtually

Augmented Reality is projected to be a $120 billion market by 2020 in the US alone; I'm looking at starting a company in that space next. Fascinating technology with a ton of potential applications, far beyond mere gaming. Its advantage is that it overlays digital content onto the real world, versus having to be completely immersed in a virtual one as with Virtual Reality, so it can be used throughout the day and in many natural environments – you don't have to choose when to use it.

Harvard Business Review has a short article just published about the Mainstreaming of AR… it has been around since 1968, but 2016 is when it's starting to take off because of hardware.

AR is less sexy than virtual reality, but has more potential for growth IMO because: 1) you don't need a lot of hardware or gear for it; 2) you don't need a dedicated space for it; 3) people aren't getting sick from using it (although I have no doubt that will be remedied); and 4) you don't need to immerse yourself in it completely, shutting out the world. Although I do seem to recall people said much the same about television when it launched (that it would "never take off" since people have to sit and watch it, not doing anything else).

So much for predictions and futurists.

I'm going up to Boston to take part in MIT Media Lab's Reality Virtually hackathon this weekend – we'll see what that's like; hoping to meet people, network, and get a real sense for what's happening out there.

 

