In case you missed it, Bumblebee has just been released, and I’m wondering what you all might be using it for, or what you might like to use it for.
Does anyone know whether it would be useful for doing things like choosing (or suggesting) a category for a thread if fed just the title? If so, that’s one thing I’d use it for.
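That reads like a zero-shot classification problem, where you supply the candidate categories as labels at inference time. A minimal sketch of how it might look with Bumblebee’s `Bumblebee.Text.zero_shot_classification` serving — the model name and the label list here are just placeholder assumptions, not a recommendation:

```elixir
# Suggest a category for a thread given only its title.
# "facebook/bart-large-mnli" and the labels are example choices.
{:ok, model_info} = Bumblebee.load_model({:hf, "facebook/bart-large-mnli"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "facebook/bart-large-mnli"})

labels = ["Elixir", "Phoenix", "Nerves", "Jobs", "Off-topic"]
serving = Bumblebee.Text.zero_shot_classification(model_info, tokenizer, labels)

# Returns predictions with scores per label; the top-scoring label
# would be the suggested category.
Nx.Serving.run(serving, "LiveView form validation not firing on phx-change")
```

The nice part is that you can change the category list without retraining anything, since the labels are only seen at inference time.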
This is the lib I’ve been waiting for - very excited by the possibilities. Immediate use cases for me:
In my text library I’ll be adding:
- Language detection (replacing the current very old-school Bayesian model)
- Parts-of-speech tagging
- Sentiment analysis
In my image library I’ve an experimental branch that does:
- Image classification (leveraging the examples from the Bumblebee repo)
- Image segmentation (not implemented yet; I don’t think that’s built into Bumblebee, so perhaps I can work out how to make a contribution)
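For anyone curious what the image-classification case looks like, here is a rough sketch along the lines of the Bumblebee repo examples — the ResNet checkpoint, the `StbImage` loader, and the file path are all assumptions for illustration:

```elixir
# Classify an image with a pre-trained model via Bumblebee.
# "microsoft/resnet-50" and "photo.jpg" are placeholder choices.
{:ok, model_info} = Bumblebee.load_model({:hf, "microsoft/resnet-50"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "microsoft/resnet-50"})

# top_k: 3 asks the serving for the three highest-scoring labels.
serving = Bumblebee.Vision.image_classification(model_info, featurizer, top_k: 3)

image = StbImage.read_file!("photo.jpg")
Nx.Serving.run(serving, image)
```

In a real app you’d typically start the serving under a supervisor with `Nx.Serving` and call `Nx.Serving.batched_run/2`, so concurrent requests get batched onto the GPU together.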
I’m really excited to see this as well Kip! I guess we’ll be able to use it to help detect prohibited images? Such as porn, self-harm, etc?
Would it also be possible to use it to invisibly watermark images, so that the source can always be checked or verified? Perhaps via something like markpainting? (The use case I had in mind is invisibly watermarking user-uploaded images, such as those uploaded as profile pics or to sites like Instagram. The uploaded image would get an invisible watermark so that if someone screenshotted it or stole it and then uploaded it to a ‘fake profile’ we could identify the true source of the image, e.g., your profile photo here could contain an invisible watermark of
I believe so - just need a good classification model for that kind of material. Not sure the Hugging Face repository has such?
Would it also be possible to use it to invisibly watermark images, so that the source can always be checked or verified? Perhaps via something like markpainting?
Interesting idea, I’ll do some research!
Looks like they’ve got some, Kip:
Porn models: Models - Hugging Face
Self-harm models: Models - Hugging Face
Awesome! I think that’d be a killer feature for your library
Several years ago, I served as a mentor for a friend’s DO-IT project. The result was a set of Ruby scripts which implemented a desktop prototype of The Muddy Map Explorer. As the name hints, Muddy provided a text-based interface (similar to that found in a MUD).
However, instead of presenting a fictional location, Muddy would let someone (e.g., a blind person) explore a given neighborhood. Our notion was that this would help the person to create a mental model while sitting safely at a computer. Later, armed with this knowledge, they might be better equipped to navigate the neighborhood in person.
Although the prototype system “worked”, it was far from perfect. A large part of the problem stemmed from our use of OpenStreetMap data. Because OSM encodes streets as annotated sequences of locations, it provides very spotty coverage of details that are only of interest to pedestrians (e.g., benches, driveways, roadway lanes and widths).
In addition, installing and maintaining the prototype would not be easy for a naive user (blind or not). So, I’ve often wondered if I could translate Muddy to Elixir and set it up as a Phoenix/LiveView-based web site. This would resolve the installation and maintenance issues, as well as making Muddy truly a multi-user system. However, the source data issues would still be present.
With the advent of Bumblebee, it might be possible to analyze satellite and other imagery, creating much richer subsets of the OSM data. And, if the added data came from unrestricted sources (such as USGS), the OSM folks might even be willing to import it into their repository. Alas, this is all Science Fiction, but I still find it interesting to think about…
I am learning Elixir & Phoenix for Indie Hacking.
Recently Pieter Levels created a few more startups where he is using pre-trained models to make money!
I believe everyone in this community should have their own startup or Micro SaaS, which will inspire the rest to follow suit.
That’s the best way to showcase the power of the platform and win hearts.
A common goal will lead to the creation of lots of tooling to achieve said goal.
Hope to see an IndieHacker tag for people who are creating their own stuff.
Let me do all the learning, and then we can have a 12-startups-in-12-months marathon instead of the Advent of Code kerfuffle.
Check out some of the cool applications:
I’m playing with Bumblebee’s text generation to create a collaborative story builder:
The idea is to have an editor that locks every x seconds to ask its audience how the story should proceed, with options provided by Bumblebee. Let’s see how that will grow.
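Sounds fun! For reference, the core generation step could look something like the sketch below — note the model name and prompt are placeholders, and the exact generation API has shifted a bit between Bumblebee versions, so treat this as a rough outline:

```elixir
# Generate a short continuation of the story so far.
# "gpt2" is a placeholder model; any causal LM checkpoint could work.
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "gpt2"})

serving = Bumblebee.Text.generation(model_info, tokenizer, generation_config)

story_so_far = "The lighthouse keeper heard a knock at midnight."
Nx.Serving.run(serving, story_so_far)
```

To offer the audience several options each round, you could presumably run the serving a few times with sampling enabled and present each continuation as a vote choice.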