
Beidi, Tri, Atri, and team describe Pixelated Butterfly for hardware-efficient sparsity. Sabri and team discuss discovering systematic errors with cross-modal embeddings.
Mayee and Dan discuss how to improve the robustness of contrastive learning in Thanos. Karan and Albert use S4 to generate raw audio in SaShiMi. At ICML 2022, we describe audio generation with state spaces, fast learning with sparse matrices, making contrastive learning robust, and a new method for robustness. At UAI 2022, Dan, Mayee, and friends show how to combine weak supervision and foundation models, which is amazing! We continue to improve long-sequence modeling with state spaces, which are also great for sparsity. See the MLPerf story on Tri, and our Best Paper award at the Hardware-Efficient Training workshop. Importantly, FlashAttention enables the first Transformer to solve Path-X variants in the Long Range Arena benchmark.
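To give a flavor of the state-space models mentioned above, here is a minimal sketch of the discretized linear recurrence that S4-style models build on: x_k = A x_{k-1} + B u_k, y_k = C x_k. This is a toy illustration only; the actual S4 method uses special structured parameterizations (e.g., HiPPO initialization) and fast convolutional evaluation, none of which appear here, and all names below are illustrative.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discretized linear state-space model over an input sequence u:
    x_k = A x_{k-1} + B u_k,  y_k = C x_k.  Toy sketch, not the real S4."""
    n = A.shape[0]
    x = np.zeros(n)
    ys = []
    for u_k in u:
        x = A @ x + B * u_k   # state update with the current input
        ys.append(C @ x)      # linear readout of the hidden state
    return np.array(ys)

# Toy example: impulse response of a 2-state system.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])
u = np.array([1.0, 0.0, 0.0, 0.0])
y = ssm_scan(A, B, C, u)
```

Because the recurrence is linear, the whole sequence map is a convolution with a fixed kernel, which is what makes this family attractive for very long sequences.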

FlashAttention looks at attention through the lens of IO-aware algorithms. This implementation is the fastest we're aware of, including highly optimized variants from vendors. We're also really interested in improving the foundations of foundation models. Can foundation models offer perfect secrecy, and how does that compare to prior approaches like federated learning? (GitHub). In Wrangle Your Data, we show that few-shot models, not trained for any of these tasks, get state-of-the-art performance on cleaning, integration, and imputation benchmarks. Domino debugs your data by discovering systematic errors with cross-modal embeddings.
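The IO-aware idea can be sketched in a few lines of NumPy: process keys and values in blocks while maintaining a running, numerically stable softmax, so the full N x N score matrix is never materialized. This is only an illustration of the tiling/online-softmax trick under assumed toy shapes; the real FlashAttention is a fused GPU kernel that also tiles queries and manages SRAM explicitly.

```python
import numpy as np

def tiled_attention(Q, K, V, block=2):
    """Attention over key/value blocks with a running softmax, so the
    full score matrix is never materialized (toy sketch of the idea)."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    m = np.full(n, -np.inf)          # running row-wise max of scores
    l = np.zeros(n)                  # running softmax normalizer
    out = np.zeros_like(V, dtype=float)
    for s in range(0, K.shape[0], block):
        S = (Q @ K[s:s + block].T) * scale        # scores for this block
        m_new = np.maximum(m, S.max(axis=1))
        alpha = np.exp(m - m_new)                 # rescale old accumulators
        p = np.exp(S - m_new[:, None])
        l = l * alpha + p.sum(axis=1)
        out = out * alpha[:, None] + p @ V[s:s + block]
        m = m_new
    return out / l[:, None]

def naive_attention(Q, K, V):
    """Reference implementation that materializes the full score matrix."""
    S = (Q @ K.T) / np.sqrt(Q.shape[1])
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    return P @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((6, 4)) for _ in range(3))
# The tiled result matches naive attention up to floating-point error.
```

The key design choice is the rescaling factor `alpha`: whenever a new block raises the running max, the previously accumulated numerator and normalizer are rescaled so the final result equals the exact softmax-weighted average.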
I'm particularly excited when we can blend ML and systems, e.g., Snorkel, Overton (YouTube), or SambaNova. Our work is inspired by the observation that data is central to these systems (crudely, "AI is driven by data, not code"), and so data management principles, re-imagined, play a starring role in our work. Maybe this sounds like Silicon Valley crazy talk, but crazily enough, you've probably used a system that uses ideas from our lab in the last few hours, thanks to amazing students and collaborations with Google Ads, YouTube, Apple, and more. While we're very proud of our research ideas and their impact, the lab's real goal is to help students become professors, entrepreneurs, and researchers. To that end, over a dozen members of our group have started their own professorships. With students and collaborators, I've been fortunate enough to cofound projects including SambaNova, Snorkel, and Factory, along with two companies that are now part of Apple: Lattice (DeepDive) and Inductiv (HoloClean). For the sake of transparency, I do my best to list companies I advise or invest in here, many of which involve former members of the lab.

We've been looking at how foundation models can help us build software systems, most recently: an explainer of a simplified version of S4 (S4 Explainer Blog); a panel on Service, Science, and Startups changing research; and a SIGMOD keynote on Data-Centric AI, Declarative ML, and Foundation Models in data (slides, YouTube).

