Search Query Guide
Learn how to search the Monta Knowledgebase effectively using the query tool.
Search Modes
The knowledgebase supports three search modes to match different use cases:
Default Mode (Hybrid Search)
Best for: Most searches, natural language questions
Combines keyword matching with semantic understanding to find the most relevant results.
Example:
Query: "How do I set up a charging station?"
Mode: default
This finds documents that:
- Contain the exact words in your query
- Are semantically similar to your question
- Have been ranked by relevance and quality
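The exact blending logic is internal to the knowledgebase, but hybrid ranking is commonly sketched as a weighted combination of a keyword score and a semantic score. A minimal illustration with made-up scores and a hypothetical mixing weight `alpha` (the real service may normalize differently or use rank fusion):

```python
def hybrid_score(bm25_score, vector_score, alpha=0.5):
    """Blend a keyword (BM25) score with a semantic (vector) score.

    alpha is a hypothetical mixing weight, not the service's actual setting.
    Both inputs are assumed normalized to the 0..1 range.
    """
    return alpha * bm25_score + (1 - alpha) * vector_score

# Toy scores for two candidate documents
docs = {
    "Charging Station Guide": hybrid_score(0.9, 0.8),  # strong on both signals
    "Pricing FAQ": hybrid_score(0.2, 0.7),             # semantically related only
}
ranked = sorted(docs, key=docs.get, reverse=True)
```

A document that matches both the exact words and the meaning of the query outranks one that matches on meaning alone.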
BM25 Mode (Keyword Search)
Best for: Exact phrases, technical terms, specific keywords
Finds documents containing your exact search terms using BM25 ranking.
Example:
Query: "OCPP 2.0.1 configuration"
Mode: bm25
Use when you know the exact terminology or need precise matches.
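BM25 scores a document by how often it contains each query term, discounted by how common the term is across the whole corpus and by document length. A stripped-down version of the standard formula, for intuition only (production engines add analyzers, stemming, and many refinements):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Minimal Okapi BM25: rare terms (high IDF) in short documents score highest.

    corpus is a list of tokenized documents; k1 and b are the usual defaults.
    """
    avg_len = sum(len(d) for d in corpus) / len(corpus)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)
        tf = doc_terms.count(term)                # term frequency in this doc
        norm = tf + k1 * (1 - b + b * len(doc_terms) / avg_len)
        score += idf * tf * (k1 + 1) / norm
    return score

corpus = [
    ["ocpp", "configuration", "guide"],
    ["billing", "invoices", "overview"],
]
score_config_doc = bm25_score(["ocpp"], corpus[0], corpus)
score_billing_doc = bm25_score(["ocpp"], corpus[1], corpus)
```

The document actually containing "ocpp" gets a positive score; the one without it scores zero, which is why this mode is best for precise terminology.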
Vector Mode (Semantic Search)
Best for: Conceptual queries, finding related content
Finds documents with similar meaning, even if they don't use your exact words.
Example:
Query: "problems with billing customers"
Mode: vector
Might find documents about "invoice errors", "payment issues", or "charging session costs" even without those exact terms.
How Vector Search Works
Vector search uses embeddings and chunking to understand the semantic meaning of your query and find relevant content.
The Process:
Document Chunking (at index time)
- Large documents (>3KB) are split into chunks of 3000 characters
- Each chunk overlaps by 300 characters to maintain context continuity
- Chunks break at sentence boundaries when possible for readability
- Each chunk maintains a reference to its parent document
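The chunking rules above can be sketched as a sliding window that prefers to cut at sentence boundaries. This is an illustrative reconstruction, not the service's actual code; the demo uses tiny window sizes so the behavior is visible:

```python
def chunk_text(text, size=3000, overlap=300):
    """Split text into ~size-char chunks with `overlap` chars of shared context,
    cutting at the last sentence end ('. ') inside the window when possible."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + size, len(text))
        if end < len(text):
            cut = text.rfind(". ", start, end)
            if cut > start:
                end = cut + 1  # keep the sentence's final period
        chunks.append(text[start:end])
        if end >= len(text):
            break
        start = max(end - overlap, start + 1)  # overlap, but always advance
    return chunks

# Tiny sizes for demonstration; the real index uses size=3000, overlap=300
parts = chunk_text("one. two. three. four.", size=10, overlap=3)
```

Each chunk starts a little before the previous one ended, so a sentence split across a window boundary still appears intact in at least one chunk.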
Embedding Generation (at index time)
- Every chunk is converted into a 1536-dimensional vector using OpenAI's text-embedding-3-small model
- This vector captures the semantic meaning of the text
- Full documents are also stored with their complete content alongside embeddings
Query Embedding (at search time)
- Your search query is converted into the same 1536-dimensional vector space
- This allows mathematical comparison of semantic similarity
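The "mathematical comparison" is cosine similarity: the angle between the query vector and each chunk vector, where 1.0 means identical direction (same meaning). A sketch using toy 3-dimensional vectors in place of the real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy stand-ins for real embeddings; values are invented for illustration
query = [0.1, 0.9, 0.2]               # "how to troubleshoot charging errors"
chunk_about_errors = [0.1, 0.8, 0.3]  # points in nearly the same direction
chunk_about_pricing = [0.9, 0.1, 0.1] # points elsewhere in the space
sim_errors = cosine_similarity(query, chunk_about_errors)
sim_pricing = cosine_similarity(query, chunk_about_pricing)
```

The error-related chunk scores near 1.0 while the pricing chunk scores much lower, even though no exact words were compared at all.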
Chunk Search (RAG - Retrieval Augmented Generation)
- The system searches through all document chunks using k-Nearest Neighbors (kNN)
- Finds chunks with vectors most similar to your query vector (cosine similarity)
- Multiple chunks from the same document may match your query
Document Retrieval
- Matching chunks are grouped by their parent document ID
- Each document is scored by its best-matching chunk
- Full documents (not chunks) are returned with their complete content
- The best-matching chunk's text is shown as a preview highlight
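The grouping step can be sketched as a reduce over chunk hits, keeping each parent document's best score and its best chunk as the highlight. The field names (`parent_id`, `score`, `text`) are hypothetical, chosen to mirror the description above:

```python
def group_chunks(chunk_hits):
    """Group chunk hits by parent document, keeping the best score per document
    and the best-matching chunk's text as the preview highlight."""
    docs = {}
    for hit in chunk_hits:
        doc_id = hit["parent_id"]
        best = docs.get(doc_id)
        if best is None or hit["score"] > best["score"]:
            docs[doc_id] = {"score": hit["score"], "highlight": hit["text"]}
    # Rank documents by their best chunk score, highest first
    return sorted(docs.items(), key=lambda kv: kv[1]["score"], reverse=True)

hits = [
    {"parent_id": "charging-guide", "score": 0.95, "text": "Reset the charge point..."},
    {"parent_id": "error-codes",    "score": 0.87, "text": "Error E42 means..."},
    {"parent_id": "charging-guide", "score": 0.82, "text": "Check the cable..."},
]
ranked = group_chunks(hits)
```

Two hits from "charging-guide" collapse into one result scored 0.95, and the full documents for the surviving IDs are then fetched and returned.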
Why Chunking?
Chunking improves search accuracy because:
- Large documents may contain many topics; chunking isolates relevant sections
- Embeddings capture meaning better for focused text passages
- You still get the full document context when needed
Example Flow:
User Query: "How to troubleshoot charging errors"
↓
Query Embedding: [0.123, 0.456, 0.789, ...] (1536 numbers)
↓
Search Chunks Index (3000+ chunks from all documents)
↓
Best Matching Chunks:
- Chunk #5 from "Charging Station Guide" (score: 0.95)
- Chunk #12 from "Error Code Reference" (score: 0.87)
- Chunk #3 from "Charging Station Guide" (score: 0.82)
↓
Group by Document:
- "Charging Station Guide": best score 0.95
- "Error Code Reference": best score 0.87
↓
Fetch Full Documents (entire content, not just chunks)
↓
Return: Complete "Charging Station Guide" + "Error Code Reference" documents
This approach combines the precision of chunk-level semantic search with the utility of returning full documents for complete context.
Source Filtering
Official Sources Filter
When you need only official Monta content, use the official=true parameter to filter results to company-authored sources.
What Are Official Sources?
The official filter restricts results to content created by Monta:
- Website: Official Monta website content
- Help Center (helpjuice): Primary support documentation
- Product Updates (beamer): Official release notes and announcements
- RFx Engine: Verified RFP/RFI/RFQ responses
- Notion: Internal documentation and wiki
- Sales Calls (gong): Curated sales call Q&A from pre/post-sales calls
- Monta Collection: Curated internal documents from the "monta" collection
When to Use Official Filtering
Use official=true when:
- You need official Monta company information or policies
- You're building customer-facing features that require accurate information
- You're researching product specifications or technical requirements
- You need company-approved answers only
Don't use official filtering when:
- You want to include industry standards and best practices
- You're searching for troubleshooting tips from support tickets
- You're looking for edge cases or known issues from ticket history
Example Usage
MCP Tool:
{
  "query": "How does load balancing work?",
  "official": true,
  "maxResults": 10
}
REST API:
# GET request
curl "https://knowledgebase.monta.app/api/query?query=load%20balancing&official=true"
# POST request
curl -X POST https://knowledgebase.monta.app/api/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "load balancing",
    "official": true,
    "maxResults": 10
  }'
What Gets Filtered Out
With official=true, these sources are excluded:
- Support Tickets (zendesk): User-generated content and troubleshooting
- Support Calls (aircall): Transcribed support call Q&A (user-generated)
- Industry Documents: Standards, specifications, and third-party documentation
This ensures you get only Monta-authored content.
Note: Gong sales calls ARE included in official sources because they contain curated pre-sales and post-sales Q&A, unlike Aircall which contains user-generated support call transcripts.
Factual Sources Filter
When you need reliable, documented information (including industry standards), use the factual=true parameter to filter results to trusted sources.
What Are Factual Sources?
The factual filter restricts results to:
- Website: Official Monta website content
- Help Center (helpjuice): Primary support documentation
- Product Updates (beamer): Official release notes and announcements
- RFx Engine: Verified RFP/RFI/RFQ responses
- Notion: Internal documentation and wiki
- Sales Calls (gong): Curated sales call Q&A from pre/post-sales calls
- All Documents: Industry standards, specifications, and documentation
When to Use Factual Filtering
Use factual=true when:
- You need documented information from reliable sources
- You want to include industry standards and best practices
- You're researching technical specifications (both Monta and industry)
- You need verified information but don't need to restrict results to Monta-only content
Don't use factual filtering when:
- You want to include community solutions from support tickets
- You're looking for troubleshooting tips based on user experiences
- You're searching for edge cases or known issues from ticket history
Example Usage
MCP Tool:
{
  "query": "OCPP 2.0.1 specifications",
  "factual": true,
  "maxResults": 10
}
REST API:
# GET request
curl "https://knowledgebase.monta.app/api/query?query=OCPP%20specifications&factual=true"
# POST request
curl -X POST https://knowledgebase.monta.app/api/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "OCPP specifications",
    "factual": true,
    "maxResults": 10
  }'
What Gets Filtered Out
With factual=true, these sources are excluded:
- Support Tickets (zendesk): Community-contributed solutions and troubleshooting
- Support Calls (aircall): Transcribed support call Q&A (user-generated)
This ensures you get documented, reliable information from both Monta and industry sources.
Note: Gong sales calls ARE included in factual sources because they contain curated pre-sales and post-sales Q&A with confidence scoring.
How Relevance Ranking Works
Results are ranked using multiple factors:
1. Where the Match Appears
Matches in different locations have different importance:
- Keywords: Highest priority - curated important terms
- Title: Very high priority - document titles are key indicators
- Description: Medium priority - document summaries
- Content: Standard priority - body text
Example: Searching for "solar panels" will rank a document titled "Solar Panel Installation Guide" higher than one that mentions solar panels only in the content.
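The field priorities above behave like per-field boost multipliers. The weights below are invented for illustration; the service's actual values are internal:

```python
# Hypothetical field boosts mirroring the priority order described above
FIELD_BOOSTS = {"keywords": 4.0, "title": 3.0, "description": 1.5, "content": 1.0}

def field_score(matches):
    """matches maps field name -> raw match strength for one document."""
    return sum(FIELD_BOOSTS[field] * strength for field, strength in matches.items())

# "Solar Panel Installation Guide": strong title match plus a content mention
titled_doc = field_score({"title": 1.0, "content": 0.4})
# A document mentioning solar panels only in its body text
content_only = field_score({"content": 1.0})
```

Even a weaker body match cannot overcome the title boost, which is why the titled guide ranks first in the solar-panel example.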
2. Document Source Quality
Different content sources have different weights:
- Help Center (helpjuice): Primary documentation - highest priority
- Website: Official content - high priority
- Product Updates (beamer): Release notes - standard priority
- Support Tickets (zendesk): Community solutions - reduced priority to prevent overwhelming results
- Support Calls (aircall): Transcribed call Q&A - similar priority to zendesk
- Sales Calls (gong): Pre/post-sales Q&A - high priority for sales-related queries
3. Content Freshness
Newer content ranks higher for time-sensitive sources:
- Product Updates: Recent updates are prioritized (features may have changed)
- Support Tickets: Recent tickets reflect current product behavior
Older content remains available but ranks lower when similar newer content exists.
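Freshness weighting of this kind is often modeled as exponential decay on document age. The half-life below is an illustrative assumption, not the service's configured value:

```python
def freshness_factor(age_days, half_life_days=365):
    """Exponential decay: content one 'half-life' old counts half as much.

    half_life_days is a hypothetical tuning parameter for illustration.
    """
    return 0.5 ** (age_days / half_life_days)

recent = freshness_factor(30)   # a month-old product update
stale = freshness_factor(730)   # a two-year-old ticket
```

Old content never reaches zero, so it stays retrievable; it simply needs a stronger relevance match to outrank newer material on the same topic.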
4. Content Quality
Documents are scored based on metadata:
- User satisfaction ratings
- Popularity (how often the content helps others)
- Completeness and clarity
Interactive Tester
Visit /query for an interactive search interface where you can:
- Try different search modes
- Filter by content sources
- Adjust result limits
- See search performance metrics