🚀 Getting Started
💡 Spotlight helps you to identify critical data segments and model failure modes. It enables you to build and maintain reliable machine learning models by curating high-quality datasets.
Introduction
Spotlight is built on the idea that you can only truly understand unstructured datasets if you can interactively explore them. Its core principle is to identify and fix critical data segments by leveraging data enrichments (e.g. features, embeddings, uncertainties). Pre-defined templates for typical data curation workflows get you started quickly and connect your stack to the data-centric AI ecosystem.
We are building Spotlight for cross-functional teams that want to be in control of their data and data curation processes. Currently, Spotlight supports many use cases based on image, audio, video and time series data.
⏱️ Quickstart
Get started by installing Spotlight and loading your first dataset.
What you'll need
- Python version 3.7-3.10
Install Spotlight via pip
pip install renumics-spotlight
We recommend installing Spotlight and everything you need to work on your data in a separate virtual environment.
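To verify the installation, a minimal sanity check (assuming Python 3.8 or newer, where importlib.metadata is available) could look like this:
# Print the installed Spotlight package version.
# Requires Python 3.8+ for importlib.metadata.
from importlib.metadata import version
print(version("renumics-spotlight"))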
Load a dataset and start exploring
Python:
import pandas as pd
from renumics import spotlight
df = pd.read_csv("https://renumics.com/data/mnist/mnist-tiny.csv")
spotlight.show(df, dtype={"image": spotlight.Image, "embedding": spotlight.Embedding})
pd.read_csv loads a sample CSV file as a pandas DataFrame. spotlight.show opens up Spotlight in the browser with the pandas DataFrame ready for you to explore. The dtype argument specifies custom column types for the browser viewer.
CLI:
curl https://renumics.com/data/mnist/mnist-tiny.csv -o mnist-tiny.csv
spotlight mnist-tiny.csv --dtype image=Image --dtype embedding=Embedding
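To point Spotlight at your own data instead of the sample, the same pattern applies. This is a minimal sketch; the file name (my_dataset.csv) and column name (image_path) are placeholders for your own data:
import pandas as pd
from renumics import spotlight

# "my_dataset.csv" and "image_path" are placeholder names, replace them with your own.
df = pd.read_csv("my_dataset.csv")
# Render the column holding image file paths or URLs as images in the viewer.
spotlight.show(df, dtype={"image_path": spotlight.Image})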
Load a Hugging Face dataset
import datasets
from renumics import spotlight
dataset = datasets.load_dataset("olivierdehaene/xkcd", split="train")
df = dataset.to_pandas()
spotlight.show(df, dtype={"image_url": spotlight.Image})
The datasets package can be installed via pip.
Disclaimer
Usage Tracking
We have added crash report and performance collection.
We do NOT collect user data other than an anonymized machine ID obtained via py-machineid, and we only log our own actions.
We do NOT collect folder names, dataset names, or row data of any kind; we only collect aggregate performance statistics (such as the total time of a table_load), crash data, etc.
Collecting spotlight crashes will help us improve stability.
To opt out of crash report collection, define an environment variable called SPOTLIGHT_OPT_OUT and set it to true, e.g.:
export SPOTLIGHT_OPT_OUT=true
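As a sketch, you can also set the variable from within Python, assuming Spotlight reads it at startup:
import os

# Opt out of crash report collection for this process.
# Assumes SPOTLIGHT_OPT_OUT is read when Spotlight starts,
# so set it before importing and launching Spotlight.
os.environ["SPOTLIGHT_OPT_OUT"] = "true"

from renumics import spotlight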
🧭 Start by use case
You can adapt Spotlight to your data curation tasks. To get you started quickly, we are continuously developing pre-defined recipes for common workflows.
Get started quickly with our 📒 Playbook.
Tell us which data curation task is important for your work:
- Open an issue on GitHub
- Have a coffee talk with us
- Join our Discord