Overview
Neode supports semantic search using vector embeddings. Instead of matching exact keywords, semantic search finds content based on meaning and context.

How It Works
- Embedding Generation: Text is converted into a vector (array of numbers) that captures its meaning
- Similarity Search: Vectors are compared to find semantically similar content
- Results: Returns triples and entities with similar meaning to your query
Generating Embeddings
Convert text to embeddings for semantic search.

Single Embedding
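The exact client call depends on your setup; as an offline sketch, the hypothetical `embed` function below mimics the output shape documented for Neode's model (256 dimensions, L2-normalized, deterministic) without capturing any real semantics:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical stand-in for a real embedding call: deterministic and
    # L2-normalized like the real model, but hash-based, so it carries
    # no semantic meaning. Swap in an actual embeddings API in practice.
    raw = [
        int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
        / 2**63 - 1.0
        for i in range(dim)
    ]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

vec = embed("Marie Curie discovered radium")
print(len(vec))                           # → 256
print(round(sum(x * x for x in vec), 6))  # unit length → 1.0
```

The same text always yields the same vector, which is what makes caching (see Best Practices) safe.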
Batch Embeddings
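A batching sketch, again assuming a hypothetical `embed` stand-in; in a real client each chunk would be a single API request rather than a loop of per-text calls:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical stand-in for a real embedding call (deterministic, L2-normalized).
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
           / 2**63 - 1.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

def embed_batch(texts: list[str], batch_size: int = 100) -> list[list[float]]:
    # Chunk the inputs; in a real client each chunk would be one API round trip.
    out: list[list[float]] = []
    for i in range(0, len(texts), batch_size):
        out.extend(embed(t) for t in texts[i:i + batch_size])
    return out

vectors = embed_batch(["graph database", "vector search", "knowledge triple"])
print(len(vectors))  # → 3
```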
Batching multiple texts into one request is faster and cheaper than embedding them one at a time.

Semantic Search in the UI
The Neode web interface at neode.ai/explore uses semantic search:

- Enter a natural language query
- The system finds entities and triples with similar meaning
- Results are ranked by semantic similarity
Building Custom Semantic Search
Step 1: Generate Query Embedding
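A sketch of this step, using a hypothetical `embed` stand-in (deterministic, 256-dim, L2-normalized) in place of the real embeddings API:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical stand-in for the real embedding model.
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
           / 2**63 - 1.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

# Turn the user's natural-language query into a vector.
query_vec = embed("Who discovered radium?")
print(len(query_vec))  # → 256
```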
Step 2: Compare with Stored Embeddings
Entities and triples in Neode have pre-computed embeddings. You can:

- Use the explore page: neode.ai/explore handles this automatically
- Build custom search: Store embeddings in a vector database and query with cosine similarity
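Because the stored vectors are L2-normalized, cosine similarity reduces to a plain dot product; a minimal sketch:

```python
def cosine_similarity(a: list[float], b: list[float]) -> float:
    # For unit-length vectors the dot product equals cos(theta),
    # ranging from -1 (opposite) through 0 (unrelated) to 1 (identical).
    return sum(x * y for x, y in zip(a, b))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal → 0.0
```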
Step 3: Rank Results
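Ranking is then a sort over (item, score) pairs; the scores below are illustrative:

```python
# Hypothetical similarity scores against a query.
results = [
    {"item": "triple A", "score": 0.42},
    {"item": "entity B", "score": 0.87},
    {"item": "triple C", "score": 0.65},
]
ranked = sorted(results, key=lambda r: r["score"], reverse=True)
print([r["item"] for r in ranked])  # → ['entity B', 'triple C', 'triple A']
```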
Sort results by similarity score (higher = more similar).

Embedding Model
Neode uses a 256-dimension embedding model optimized for knowledge graph content:

| Property | Value |
|---|---|
| Dimensions | 256 |
| Model | OpenAI text-embedding |
| Normalization | L2 normalized |
Use Cases
Finding Related Entities
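A sketch against a small in-memory index. The entity names are illustrative and `embed`/`cosine` are hypothetical helpers; note that this hash-based stand-in only ranks exact text matches highly, whereas a real embedding model ranks by meaning:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical stand-in for the real embedding model (deterministic, L2-normalized).
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
           / 2**63 - 1.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # vectors are unit length

entities = ["radioactivity", "polonium", "steam engine"]  # illustrative
index = {name: embed(name) for name in entities}

query_vec = embed("radioactivity")
ranked = sorted(index, key=lambda n: cosine(query_vec, index[n]), reverse=True)
print(ranked[0])  # the exact-match text scores ~1.0 → radioactivity
```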
Semantic search surfaces entities related to a concept even when their names share no keywords.

Question Answering
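For triples, one approach is to flatten each (subject, predicate, object) into text before embedding; everything here is a hypothetical offline sketch:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical deterministic, L2-normalized stand-in for the real model.
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
           / 2**63 - 1.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

triples = [
    ("Marie Curie", "discovered", "radium"),
    ("Nikola Tesla", "invented", "the induction motor"),
]
texts = [" ".join(t) for t in triples]  # flatten each triple to plain text

q = embed("Marie Curie discovered radium")
best = max(range(len(texts)), key=lambda i: cosine(q, embed(texts[i])))
print(triples[best])  # → ('Marie Curie', 'discovered', 'radium')
```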
Triples whose embedded meaning is close to a natural-language question can serve as candidate answers.

Concept Exploration
Explore a topic by searching for semantically nearby concepts.

Duplicate Detection
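Pairwise similarity above a threshold can flag candidates. The entities and the 0.95 cutoff here are illustrative, and with the hash-based stand-in only exact duplicates cross it; real embeddings would also catch near-duplicates such as spelling variants:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical deterministic, L2-normalized stand-in for the real model.
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:8], "big")
           / 2**63 - 1.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

entities = {"e1": "Marie Curie", "e2": "Marie Curie", "e3": "Nikola Tesla"}
vecs = {k: embed(v) for k, v in entities.items()}

keys = list(vecs)
dupes = [(a, b) for i, a in enumerate(keys) for b in keys[i + 1:]
         if cosine(vecs[a], vecs[b]) > 0.95]
print(dupes)  # the exact duplicates e1/e2 are flagged
```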
High pairwise similarity between entity embeddings can flag potential duplicates.

Best Practices
Embedding Quality
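One concrete habit: embed an entity's name together with its description rather than an opaque ID. The record and field names below are illustrative:

```python
entity = {
    "id": "ent_1042",  # an opaque ID alone carries little meaning for the model
    "name": "Marie Curie",
    "description": "Physicist and chemist who pioneered research on radioactivity",
}

# Combine the human-readable fields into the text that gets embedded.
text_to_embed = f"{entity['name']}: {entity['description']}"
print(text_to_embed)
# → Marie Curie: Physicist and chemist who pioneered research on radioactivity
```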
For best results, embed meaningful text: names and descriptions carry far more signal than opaque identifiers.

Caching
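A minimal in-memory cache sketch; `fake_embed` is a hypothetical stand-in that counts calls so the cache's effect is visible:

```python
call_count = 0

def fake_embed(text: str) -> list[float]:
    # Stand-in for a real embeddings API call; counts invocations.
    global call_count
    call_count += 1
    return [0.0] * 256

_cache: dict[str, list[float]] = {}

def cached_embed(text: str) -> list[float]:
    # Deterministic output means the text itself is a safe cache key.
    if text not in _cache:
        _cache[text] = fake_embed(text)
    return _cache[text]

cached_embed("knowledge graph")
cached_embed("knowledge graph")  # served from cache, no second API call
print(call_count)  # → 1
```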
Embeddings are deterministic. Cache them to avoid repeated API calls.

Batch for Efficiency
When embedding multiple items, use batch mode rather than one request per text.

API Reference
See the complete Embeddings API documentation.