Jonathan first discovered his love of all things data while studying Computer Science and Physics at UC Berkeley. In a former life, he worked for Alpine Data Labs developing distributed machine learning algorithms for predictive analytics on Hadoop. Currently, he is redefining data science education as the co-founder and CTO of Zipfian Academy.
Jonathan has always had a passion for sharing the things he has learned in the most creative ways he can. He has been a mentor at Dev Bootcamp, taught classes at General Assembly, and was an instructor at Hack Reactor. At Zipfian Academy, he gets to combine his two favorite things: humans and code.
Room: N-124 | Time: 4:30pm - 4:50pm
With the ever-increasing amount of data produced by IoT devices, rich experiences that were once the realm of science fiction are now possible. From data-driven applications such as Google Now to smart home automation devices like Nest, companies are leveraging machine learning to deliver personalized products. But as users come to expect seamless experiences, engineers must deliver results that feel effortless and arrive in realtime.
In this talk I will cover different approaches to deploying a production machine learning application on massive amounts of data to deliver insight in near-realtime. By leveraging distributed architectures along with streaming and online algorithms, you can intelligently break a larger problem into pieces and build a responsive data product. But with any approach (streaming or batch) there are tradeoffs in how you design your system, and knowing the limits of each is critical.
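To make the online-algorithm idea concrete, here is a minimal sketch (not from the talk itself) of a linear model updated one observation at a time with stochastic gradient descent. Because each event updates the model immediately, predictions stay fresh without batch retraining; all names and data here are illustrative.

```python
# Minimal sketch of an online (streaming) algorithm: a linear model
# updated incrementally with stochastic gradient descent.

def sgd_update(w, b, x, y, lr=0.01):
    """Apply one online update for a single (features, target) pair."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y
    # Gradient step on squared error for this single observation.
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b

# Simulate an event stream (illustrative data following y = x1 + 2*x2).
w, b = [0.0, 0.0], 0.0
stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([3.0, 3.0], 9.0)] * 200
for x, y in stream:
    w, b = sgd_update(w, b, x, y)
```

The same update shape underlies streaming libraries such as scikit-learn's `partial_fit` estimators; the tradeoff versus batch training is exactly the one the abstract mentions: lower latency per event in exchange for noisier, order-dependent convergence.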