import { CodeGroup } from '@/app/components/develop/code.tsx'
import { Row, Col, Properties, Property, Heading, SubProperty, PropertyInstruction, Paragraph } from '@/app/components/develop/md.tsx'

# Knowledge API
### Authentication

The Service API of Dify authenticates with an `API-Key`. Developers should store the `API-Key` on the backend rather than sharing or embedding it on the client side; a leaked `API-Key` can cause serious damage. Include your `API-Key` in the **`Authorization`** HTTP header of every API request, as shown below:

```javascript
Authorization: Bearer {API_KEY}
```
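In application code, the header can be built once and reused for every request. A minimal Python sketch; the `API_KEY` value is a placeholder, not a real credential:

```python
# Build the Authorization header once and reuse it for every request.
# API_KEY is a placeholder; in production, load it from a backend-side
# secret store, never from client-side code.
API_KEY = "dataset-xxxxxxxx"

def auth_headers(api_key: str) -> dict:
    """Return the headers every Knowledge API request must carry."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers(API_KEY)
```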

This API creates a new document in an existing knowledge base using text.

### Params

- dataset_id (string) Knowledge ID

### Request Body

- name Document name
- text Document content
- doc_type Type of document (optional):
  - book Book
  - web_page Web page
  - paper Academic paper/article
  - social_media_post Social media post
  - wikipedia_entry Wikipedia entry
  - personal_document Personal document
  - business_document Business document
  - im_chat_log Chat log
  - synced_from_notion Notion document
  - synced_from_github GitHub document
  - others Other document types
- doc_metadata Document metadata (required if doc_type is provided). Fields vary by doc_type:

  For book:
  - title Book title
  - language Book language
  - author Book author
  - publisher Publisher name
  - publication_date Publication date
  - isbn ISBN number
  - category Book category

  For web_page:
  - title Page title
  - url Page URL
  - language Page language
  - publish_date Publish date
  - author/publisher Author or publisher
  - topic/keywords Topic or keywords
  - description Page description

  Please check [api/services/dataset_service.py](https://github.com/langgenius/dify/blob/main/api/services/dataset_service.py#L475) for more details on the fields required for each doc_type.
  For doc_type "others", any valid JSON object is accepted.
- indexing_technique Index mode
  - high_quality High quality: embedding using an embedding model, built as a vector database index
  - economy Economy: built using an inverted index of the keyword table index
- doc_form Format of indexed content
  - text_model Text documents are directly embedded; `economy` mode defaults to this form
  - hierarchical_model Parent-child mode
  - qa_model Q&A mode: generates Q&A pairs for segmented documents and then embeds the questions
- doc_language In Q&A mode, specify the language of the document, for example: English, Chinese
- process_rule Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule. Enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether to apply this rule. If no document ID is passed in, it represents the default value.
    - segmentation (object) Segmentation rules
      - separator Custom segment identifier; currently only one delimiter is allowed. Default is \n
      - max_tokens Maximum length (tokens), defaults to 1000
    - parent_mode Retrieval mode of parent chunks: full-doc full-text retrieval / paragraph paragraph retrieval
    - subchunk_segmentation (object) Child chunk rules
      - separator Segmentation identifier; currently only one delimiter is allowed. Default is ***
      - max_tokens Maximum length (tokens); must be shorter than the length of the parent chunk
      - chunk_overlap Overlap between adjacent chunks (optional)

When no parameters are set for the knowledge base, the first upload requires the following parameters to be provided; if not provided, the default parameters will be used.
- retrieval_model Retrieval model
  - search_method (string) Search method
    - hybrid_search Hybrid search
    - semantic_search Semantic search
    - full_text_search Full-text search
  - reranking_enable (bool) Whether to enable reranking
  - reranking_mode (object) Rerank model configuration
    - reranking_provider_name (string) Rerank model provider
    - reranking_model_name (string) Rerank model name
  - top_k (int) Number of results to return
  - score_threshold_enabled (bool) Whether to enable score threshold
  - score_threshold (float) Score threshold
- embedding_model Embedding model name
- embedding_model_provider Embedding model provider

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/document/create-by-text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "text",
    "text": "text",
    "indexing_technique": "high_quality",
    "process_rule": {
        "mode": "automatic"
    }
}'
```

```json {{ title: 'Response' }}
{
  "document": {
    "id": "",
    "position": 1,
    "data_source_type": "upload_file",
    "data_source_info": {
      "upload_file_id": ""
    },
    "dataset_process_rule_id": "",
    "name": "text.txt",
    "created_from": "api",
    "created_by": "",
    "created_at": 1695690280,
    "tokens": 0,
    "indexing_status": "waiting",
    "error": null,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "archived": false,
    "display_status": "queuing",
    "word_count": 0,
    "hit_count": 0,
    "doc_form": "text_model"
  },
  "batch": ""
}
```
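A minimal Python sketch of building this request body, mirroring the cURL example above; the base URL and `dataset_id` are placeholders, and the actual HTTP call (e.g. with the `requests` library) is left out:

```python
import json

# Build the create-by-text request, mirroring the cURL example.
# base_url and dataset_id are placeholders for illustration only.
base_url = "https://api.dify.ai/v1"
dataset_id = "your-dataset-id"

payload = {
    "name": "text",
    "text": "text",
    "indexing_technique": "high_quality",
    "process_rule": {"mode": "automatic"},
}

url = f"{base_url}/datasets/{dataset_id}/document/create-by-text"
body = json.dumps(payload)
# A real client would now POST `body` to `url` with the auth headers.
```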
This API creates a new document in an existing knowledge base using an uploaded file.

### Params

- dataset_id (string) Knowledge ID

### Request Body

- original_document_id Source document ID (optional)
  - Used to re-upload the document or to modify the document's cleaning and segmentation configuration. Missing information is copied from the source document
  - The source document cannot be an archived document
  - When original_document_id is passed in, an update operation is performed on the document. process_rule is optional; if not provided, the segmentation method of the source document is used by default
  - When original_document_id is not passed in, a create operation is performed on the document, and process_rule is required
- indexing_technique Index mode
  - high_quality High quality: embedding using an embedding model, built as a vector database index
  - economy Economy: built using an inverted index of the keyword table index
- doc_form Format of indexed content
  - text_model Text documents are directly embedded; `economy` mode defaults to this form
  - hierarchical_model Parent-child mode
  - qa_model Q&A mode: generates Q&A pairs for segmented documents and then embeds the questions
- doc_type Type of document (optional)
  - book Book: records a book or publication
  - web_page Web page: records web page content
  - paper Academic paper/article: records an academic paper or research article
  - social_media_post Social media post: content from social media posts
  - wikipedia_entry Wikipedia entry: content from Wikipedia entries
  - personal_document Personal document: documents related to personal content
  - business_document Business document: documents related to business content
  - im_chat_log Chat log: records of instant messaging chats
  - synced_from_notion Notion document: documents synchronized from Notion
  - synced_from_github GitHub document: documents synchronized from GitHub
  - others Other document types not listed above
- doc_metadata Document metadata (required if doc_type is provided). Fields vary by doc_type:

  For book:
  - title Title of the book
  - language Language of the book
  - author Author of the book
  - publisher Name of the publishing house
  - publication_date Date when the book was published
  - isbn International Standard Book Number
  - category Category or genre of the book

  For web_page:
  - title Title of the web page
  - url URL address of the web page
  - language Language of the web page
  - publish_date Date when the web page was published
  - author/publisher Author or publisher of the web page
  - topic/keywords Topics or keywords of the web page
  - description Description of the web page content

  Please check [api/services/dataset_service.py](https://github.com/langgenius/dify/blob/main/api/services/dataset_service.py#L475) for more details on the fields required for each doc_type.

  For doc_type "others", any valid JSON object is accepted.
- doc_language In Q&A mode, specify the language of the document, for example: English, Chinese
- process_rule Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule. Enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether to apply this rule. If no document ID is passed in, it represents the default value.
    - segmentation (object) Segmentation rules
      - separator Custom segment identifier; currently only one delimiter is allowed. Default is \n
      - max_tokens Maximum length (tokens), defaults to 1000
    - parent_mode Retrieval mode of parent chunks: full-doc full-text retrieval / paragraph paragraph retrieval
    - subchunk_segmentation (object) Child chunk rules
      - separator Segmentation identifier; currently only one delimiter is allowed. Default is ***
      - max_tokens Maximum length (tokens); must be shorter than the length of the parent chunk
      - chunk_overlap Overlap between adjacent chunks (optional)
- file Files that need to be uploaded.

When no parameters are set for the knowledge base, the first upload requires the following parameters to be provided; if not provided, the default parameters will be used.

- retrieval_model Retrieval model
  - search_method (string) Search method
    - hybrid_search Hybrid search
    - semantic_search Semantic search
    - full_text_search Full-text search
  - reranking_enable (bool) Whether to enable reranking
  - reranking_mode (object) Rerank model configuration
    - reranking_provider_name (string) Rerank model provider
    - reranking_model_name (string) Rerank model name
  - top_k (int) Number of results to return
  - score_threshold_enabled (bool) Whether to enable score threshold
  - score_threshold (float) Score threshold
- embedding_model Embedding model name
- embedding_model_provider Embedding model provider

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/document/create-by-file' \
--header 'Authorization: Bearer {api_key}' \
--form 'data="{\"name\":\"Dify\",\"indexing_technique\":\"high_quality\",\"process_rule\":{\"rules\":{\"pre_processing_rules\":[{\"id\":\"remove_extra_spaces\",\"enabled\":true},{\"id\":\"remove_urls_emails\",\"enabled\":true}],\"segmentation\":{\"separator\":\"###\",\"max_tokens\":500}},\"mode\":\"custom\"}}";type=text/plain' \
--form 'file=@"/path/to/file"'
```

```json {{ title: 'Response' }}
{
  "document": {
    "id": "",
    "position": 1,
    "data_source_type": "upload_file",
    "data_source_info": {
      "upload_file_id": ""
    },
    "dataset_process_rule_id": "",
    "name": "Dify.txt",
    "created_from": "api",
    "created_by": "",
    "created_at": 1695308667,
    "tokens": 0,
    "indexing_status": "waiting",
    "error": null,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "archived": false,
    "display_status": "queuing",
    "word_count": 0,
    "hit_count": 0,
    "doc_form": "text_model"
  },
  "batch": ""
}
```
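This endpoint takes multipart/form-data: a `data` part holding the JSON configuration as a string, plus the file itself. A Python sketch of building the `data` part shown in the cURL example (the file path is a placeholder; the HTTP call itself is omitted):

```python
import json

# Build the "data" form part for create-by-file, mirroring the cURL
# example. With the `requests` library this would then be sent as:
#   files = {"data": (None, data_part, "text/plain"),
#            "file": open("/path/to/file", "rb")}
config = {
    "name": "Dify",
    "indexing_technique": "high_quality",
    "process_rule": {
        "rules": {
            "pre_processing_rules": [
                {"id": "remove_extra_spaces", "enabled": True},
                {"id": "remove_urls_emails", "enabled": True},
            ],
            "segmentation": {"separator": "###", "max_tokens": 500},
        },
        "mode": "custom",
    },
}

data_part = json.dumps(config)
```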
### Request Body

- name Knowledge name
- description Knowledge description (optional)
- doc_type Type of document (optional):
  - book Book
  - web_page Web page
  - paper Academic paper/article
  - social_media_post Social media post
  - wikipedia_entry Wikipedia entry
  - personal_document Personal document
  - business_document Business document
  - im_chat_log Chat log
  - synced_from_notion Notion document
  - synced_from_github GitHub document
  - others Other document types
- doc_metadata Document metadata (required if doc_type is provided). Fields vary by doc_type:

  For book:
  - title Book title
  - language Book language
  - author Book author
  - publisher Publisher name
  - publication_date Publication date
  - isbn ISBN number
  - category Book category

  For web_page:
  - title Page title
  - url Page URL
  - language Page language
  - publish_date Publish date
  - author/publisher Author or publisher
  - topic/keywords Topic or keywords
  - description Page description

  Please check [api/services/dataset_service.py](https://github.com/langgenius/dify/blob/main/api/services/dataset_service.py#L475) for more details on the fields required for each doc_type.

  For doc_type "others", any valid JSON object is accepted.
- indexing_technique Index technique (optional)
  - high_quality High quality
  - economy Economy
- permission Permission
  - only_me Only me
  - all_team_members All team members
  - partial_members Partial members
- provider Provider (optional, default: vendor)
  - vendor Vendor
  - external External knowledge
- External knowledge API ID (optional)
- External knowledge ID (optional)

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "name",
    "permission": "only_me"
}'
```

```json {{ title: 'Response' }}
{
  "id": "",
  "name": "name",
  "description": null,
  "provider": "vendor",
  "permission": "only_me",
  "data_source_type": null,
  "indexing_technique": null,
  "app_count": 0,
  "document_count": 0,
  "word_count": 0,
  "created_by": "",
  "created_at": 1695636173,
  "updated_by": "",
  "updated_at": 1695636173,
  "embedding_model": null,
  "embedding_model_provider": null,
  "embedding_available": null
}
```
### Query

- page Page number
- limit Number of items returned, default 20, range 1-100

```bash {{ title: 'cURL' }}
curl --location --request GET '${props.apiBaseUrl}/datasets?page=1&limit=20' \
--header 'Authorization: Bearer {api_key}'
```

```json {{ title: 'Response' }}
{
  "data": [
    {
      "id": "",
      "name": "name",
      "description": "desc",
      "permission": "only_me",
      "data_source_type": "upload_file",
      "indexing_technique": "",
      "app_count": 2,
      "document_count": 10,
      "word_count": 1200,
      "created_by": "",
      "created_at": "",
      "updated_by": "",
      "updated_at": ""
    },
    ...
  ],
  "has_more": true,
  "limit": 20,
  "total": 50,
  "page": 1
}
```
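The `has_more` flag makes it straightforward to walk all pages. A Python sketch of the paging logic; `fetch_page` is a hypothetical stand-in for an HTTP GET against `/datasets?page=N&limit=...`, stubbed here with canned data so the loop itself can be shown:

```python
# fetch_page stands in for GET /datasets?page=N&limit=...; it is stubbed
# with 3 fake datasets so the paging loop can run without a network call.
def fetch_page(page: int, limit: int = 20) -> dict:
    all_items = [{"id": str(i), "name": f"dataset-{i}"} for i in range(3)]
    start = (page - 1) * limit
    chunk = all_items[start:start + limit]
    return {"data": chunk, "has_more": start + limit < len(all_items),
            "limit": limit, "page": page}

def list_all_datasets(limit: int = 2) -> list:
    """Follow has_more until every page has been collected."""
    items, page = [], 1
    while True:
        resp = fetch_page(page, limit)
        items.extend(resp["data"])
        if not resp["has_more"]:
            break
        page += 1
    return items

datasets = list_all_datasets(limit=2)
```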
### Params

- dataset_id (string) Knowledge ID

```bash {{ title: 'cURL' }}
curl --location --request DELETE '${props.apiBaseUrl}/datasets/{dataset_id}' \
--header 'Authorization: Bearer {api_key}'
```

```text {{ title: 'Response' }}
204 No Content
```
This API updates a document in an existing knowledge base using text.

### Params

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

### Request Body

- name Document name (optional)
- text Document content (optional)
- process_rule Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule. Enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether to apply this rule. If no document ID is passed in, it represents the default value.
    - segmentation (object) Segmentation rules
      - separator Custom segment identifier; currently only one delimiter is allowed. Default is \n
      - max_tokens Maximum length (tokens), defaults to 1000
    - parent_mode Retrieval mode of parent chunks: full-doc full-text retrieval / paragraph paragraph retrieval
    - subchunk_segmentation (object) Child chunk rules
      - separator Segmentation identifier; currently only one delimiter is allowed. Default is ***
      - max_tokens Maximum length (tokens); must be shorter than the length of the parent chunk
      - chunk_overlap Overlap between adjacent chunks (optional)

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/update-by-text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "name",
    "text": "text"
}'
```

```json {{ title: 'Response' }}
{
  "document": {
    "id": "",
    "position": 1,
    "data_source_type": "upload_file",
    "data_source_info": {
      "upload_file_id": ""
    },
    "dataset_process_rule_id": "",
    "name": "name.txt",
    "created_from": "api",
    "created_by": "",
    "created_at": 1695308667,
    "tokens": 0,
    "indexing_status": "waiting",
    "error": null,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "archived": false,
    "display_status": "queuing",
    "word_count": 0,
    "hit_count": 0,
    "doc_form": "text_model"
  },
  "batch": ""
}
```
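Since both `name` and `text` are optional on update, a client can send only the fields that actually change. A small Python sketch of assembling such a partial body (helper name is illustrative, not part of the API):

```python
import json

# Assemble an update-by-text body containing only the fields that are
# actually being changed; unset fields are omitted entirely.
def build_update_payload(name=None, text=None, process_rule=None) -> str:
    payload = {k: v for k, v in {
        "name": name,
        "text": text,
        "process_rule": process_rule,
    }.items() if v is not None}
    return json.dumps(payload)

body = build_update_payload(name="name", text="text")
```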
This API updates a document in an existing knowledge base using an uploaded file.

### Params

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

### Request Body

- name Document name (optional)
- file Files to be uploaded
- process_rule Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule. Enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether to apply this rule. If no document ID is passed in, it represents the default value.
    - segmentation (object) Segmentation rules
      - separator Custom segment identifier; currently only one delimiter is allowed. Default is \n
      - max_tokens Maximum length (tokens), defaults to 1000
    - parent_mode Retrieval mode of parent chunks: full-doc full-text retrieval / paragraph paragraph retrieval
    - subchunk_segmentation (object) Child chunk rules
      - separator Segmentation identifier; currently only one delimiter is allowed. Default is ***
      - max_tokens Maximum length (tokens); must be shorter than the length of the parent chunk
      - chunk_overlap Overlap between adjacent chunks (optional)
- doc_type Type of document (optional)
  - book Book: records a book or publication
  - web_page Web page: records web page content
  - paper Academic paper/article: records an academic paper or research article
  - social_media_post Social media post: content from social media posts
  - wikipedia_entry Wikipedia entry: content from Wikipedia entries
  - personal_document Personal document: documents related to personal content
  - business_document Business document: documents related to business content
  - im_chat_log Chat log: records of instant messaging chats
  - synced_from_notion Notion document: documents synchronized from Notion
  - synced_from_github GitHub document: documents synchronized from GitHub
  - others Other document types not listed above
- doc_metadata Document metadata (required if doc_type is provided). Fields vary by doc_type:

  For book:
  - title Title of the book
  - language Language of the book
  - author Author of the book
  - publisher Name of the publishing house
  - publication_date Date when the book was published
  - isbn International Standard Book Number
  - category Category or genre of the book

  For web_page:
  - title Title of the web page
  - url URL address of the web page
  - language Language of the web page
  - publish_date Date when the web page was published
  - author/publisher Author or publisher of the web page
  - topic/keywords Topics or keywords of the web page
  - description Description of the web page content

  Please check [api/services/dataset_service.py](https://github.com/langgenius/dify/blob/main/api/services/dataset_service.py#L475) for more details on the fields required for each doc_type.

  For doc_type "others", any valid JSON object is accepted.

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/update-by-file' \
--header 'Authorization: Bearer {api_key}' \
--form 'data="{\"name\":\"Dify\",\"indexing_technique\":\"high_quality\",\"process_rule\":{\"rules\":{\"pre_processing_rules\":[{\"id\":\"remove_extra_spaces\",\"enabled\":true},{\"id\":\"remove_urls_emails\",\"enabled\":true}],\"segmentation\":{\"separator\":\"###\",\"max_tokens\":500}},\"mode\":\"custom\"}}";type=text/plain' \
--form 'file=@"/path/to/file"'
```

```json {{ title: 'Response' }}
{
  "document": {
    "id": "",
    "position": 1,
    "data_source_type": "upload_file",
    "data_source_info": {
      "upload_file_id": ""
    },
    "dataset_process_rule_id": "",
    "name": "Dify.txt",
    "created_from": "api",
    "created_by": "",
    "created_at": 1695308667,
    "tokens": 0,
    "indexing_status": "waiting",
    "error": null,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "archived": false,
    "display_status": "queuing",
    "word_count": 0,
    "hit_count": 0,
    "doc_form": "text_model"
  },
  "batch": "20230921150427533684"
}
```
### Params

- dataset_id (string) Knowledge ID
- batch (string) Batch number of uploaded documents

```bash {{ title: 'cURL' }}
curl --location --request GET '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{batch}/indexing-status' \
--header 'Authorization: Bearer {api_key}'
```

```json {{ title: 'Response' }}
{
  "data": [{
    "id": "",
    "indexing_status": "indexing",
    "processing_started_at": 1681623462.0,
    "parsing_completed_at": 1681623462.0,
    "cleaning_completed_at": 1681623462.0,
    "splitting_completed_at": 1681623462.0,
    "completed_at": null,
    "paused_at": null,
    "error": null,
    "stopped_at": null,
    "completed_segments": 24,
    "total_segments": 100
  }]
}
```
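Since indexing is asynchronous, clients typically poll this endpoint until the batch finishes. A Python sketch of the polling loop; `get_status` is a hypothetical stand-in for the HTTP GET, stubbed to return "indexing" twice and then "completed":

```python
# get_status stands in for GET .../documents/{batch}/indexing-status.
# Here it is stubbed to simulate a batch that finishes on the third poll.
_responses = iter(["indexing", "indexing", "completed"])

def get_status() -> dict:
    status = next(_responses)
    return {"data": [{"indexing_status": status,
                      "completed_segments": 24, "total_segments": 100}]}

def wait_until_indexed(max_polls: int = 10) -> str:
    """Poll until the first document reaches a terminal state."""
    for _ in range(max_polls):
        doc = get_status()["data"][0]
        if doc["indexing_status"] in ("completed", "error"):
            return doc["indexing_status"]
        # In real code, sleep between polls, e.g. time.sleep(2).
    return "timeout"

final = wait_until_indexed()
```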
### Params

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

```bash {{ title: 'cURL' }}
curl --location --request DELETE '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}' \
--header 'Authorization: Bearer {api_key}'
```

```json {{ title: 'Response' }}
{
  "result": "success"
}
```
### Params

- dataset_id (string) Knowledge ID

### Query

- keyword Search keyword; currently only searches document names (optional)
- page Page number (optional)
- limit Number of items returned, default 20, range 1-100 (optional)

```bash {{ title: 'cURL' }}
curl --location --request GET '${props.apiBaseUrl}/datasets/{dataset_id}/documents' \
--header 'Authorization: Bearer {api_key}'
```

```json {{ title: 'Response' }}
{
  "data": [
    {
      "id": "",
      "position": 1,
      "data_source_type": "file_upload",
      "data_source_info": null,
      "dataset_process_rule_id": null,
      "name": "dify",
      "created_from": "",
      "created_by": "",
      "created_at": 1681623639,
      "tokens": 0,
      "indexing_status": "waiting",
      "error": null,
      "enabled": true,
      "disabled_at": null,
      "disabled_by": null,
      "archived": false
    }
  ],
  "has_more": false,
  "limit": 20,
  "total": 9,
  "page": 1
}
```
### Params

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

### Request Body

- segments (array[object]) Segment list
  - content (text) Text content / question content, required
  - answer (text) Answer content; pass this value if the knowledge base is in Q&A mode (optional)
  - keywords (list) Keywords (optional)

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/segments' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "segments": [
        {
            "content": "1",
            "answer": "1",
            "keywords": ["a"]
        }
    ]
}'
```

```json {{ title: 'Response' }}
{
  "data": [{
    "id": "",
    "position": 1,
    "document_id": "",
    "content": "1",
    "answer": "1",
    "word_count": 25,
    "tokens": 0,
    "keywords": [
      "a"
    ],
    "index_node_id": "",
    "index_node_hash": "",
    "hit_count": 0,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "status": "completed",
    "created_by": "",
    "created_at": 1695312007,
    "indexing_at": 1695312007,
    "completed_at": 1695312007,
    "error": null,
    "stopped_at": null
  }],
  "doc_form": "text_model"
}
```
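A Python sketch of assembling the `segments` payload, with `answer` and `keywords` included only when present (the helper name is illustrative):

```python
import json

# Build the segments payload. "answer" is only meaningful when the
# knowledge base is in Q&A mode; "keywords" is optional.
def build_segments(entries) -> str:
    segments = []
    for e in entries:
        seg = {"content": e["content"]}  # content is required
        if "answer" in e:
            seg["answer"] = e["answer"]
        if "keywords" in e:
            seg["keywords"] = e["keywords"]
        segments.append(seg)
    return json.dumps({"segments": segments})

body = build_segments([{"content": "1", "answer": "1", "keywords": ["a"]}])
```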
### Path

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

### Query

- keyword Keyword (optional)
- status Search status, completed (optional)

```bash {{ title: 'cURL' }}
curl --location --request GET '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/segments' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json'
```

```json {{ title: 'Response' }}
{
  "data": [{
    "id": "",
    "position": 1,
    "document_id": "",
    "content": "1",
    "answer": "1",
    "word_count": 25,
    "tokens": 0,
    "keywords": [
      "a"
    ],
    "index_node_id": "",
    "index_node_hash": "",
    "hit_count": 0,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "status": "completed",
    "created_by": "",
    "created_at": 1695312007,
    "indexing_at": 1695312007,
    "completed_at": 1695312007,
    "error": null,
    "stopped_at": null
  }],
  "doc_form": "text_model"
}
```
### Path

- dataset_id (string) Knowledge ID
- document_id (string) Document ID
- segment_id (string) Document Segment ID

```bash {{ title: 'cURL' }}
curl --location --request DELETE '${props.apiBaseUrl}/datasets/{dataset_id}/segments/{segment_id}' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json'
```

```json {{ title: 'Response' }}
{
  "result": "success"
}
```
### Path

- dataset_id (string) Knowledge ID
- document_id (string) Document ID
- segment_id (string) Document Segment ID

### Request Body

- segment (object)
  - content (text) Text content / question content, required
  - answer (text) Answer content; pass this value if the knowledge base is in Q&A mode (optional)
  - keywords (list) Keywords (optional)
  - enabled (bool) false / true (optional)
  - regenerate_child_chunks (bool) Whether to regenerate child chunks (optional)

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/segments/{segment_id}' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "segment": {
        "content": "1",
        "answer": "1",
        "keywords": ["a"],
        "enabled": false
    }
}'
```

```json {{ title: 'Response' }}
{
  "data": [{
    "id": "",
    "position": 1,
    "document_id": "",
    "content": "1",
    "answer": "1",
    "word_count": 25,
    "tokens": 0,
    "keywords": [
      "a"
    ],
    "index_node_id": "",
    "index_node_hash": "",
    "hit_count": 0,
    "enabled": true,
    "disabled_at": null,
    "disabled_by": null,
    "status": "completed",
    "created_by": "",
    "created_at": 1695312007,
    "indexing_at": 1695312007,
    "completed_at": 1695312007,
    "error": null,
    "stopped_at": null
  }],
  "doc_form": "text_model"
}
```
### Path

- dataset_id (string) Knowledge ID
- document_id (string) Document ID

```bash {{ title: 'cURL' }}
curl --location --request GET '${props.apiBaseUrl}/datasets/{dataset_id}/documents/{document_id}/upload-file' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json'
```

```json {{ title: 'Response' }}
{
  "id": "file_id",
  "name": "file_name",
  "size": 1024,
  "extension": "txt",
  "url": "preview_url",
  "download_url": "download_url",
  "mime_type": "text/plain",
  "created_by": "user_id",
  "created_at": 1728734540
}
```
### Path

- dataset_id (string) Knowledge ID

### Request Body

- query Query keyword
- retrieval_model Retrieval model (optional; if not provided, recall follows the default method)
  - search_method (text) Search method: one of the following four keywords is required
    - keyword_search Keyword search
    - semantic_search Semantic search
    - full_text_search Full-text search
    - hybrid_search Hybrid search
  - reranking_enable (bool) Whether to enable reranking; required if the search method is semantic_search or hybrid_search (optional)
  - reranking_mode (object) Rerank model configuration; required if reranking is enabled
    - reranking_provider_name (string) Rerank model provider
    - reranking_model_name (string) Rerank model name
  - weights (float) Semantic search weight setting in hybrid search mode
  - top_k (integer) Number of results to return (optional)
  - score_threshold_enabled (bool) Whether to enable score threshold
  - score_threshold (float) Score threshold
- Unused field

```bash {{ title: 'cURL' }}
curl --location --request POST '${props.apiBaseUrl}/datasets/{dataset_id}/retrieve' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "query": "test",
    "retrieval_model": {
        "search_method": "keyword_search",
        "reranking_enable": false,
        "reranking_mode": null,
        "reranking_model": {
            "reranking_provider_name": "",
            "reranking_model_name": ""
        },
        "weights": null,
        "top_k": 2,
        "score_threshold_enabled": false,
        "score_threshold": null
    }
}'
```

```json {{ title: 'Response' }}
{
  "query": {
    "content": "test"
  },
  "records": [
    {
      "segment": {
        "id": "7fa6f24f-8679-48b3-bc9d-bdf28d73f218",
        "position": 1,
        "document_id": "a8c6c36f-9f5d-4d7a-8472-f5d7b75d71d2",
        "content": "Operation guide",
        "answer": null,
        "word_count": 847,
        "tokens": 280,
        "keywords": [
          "install",
          "java",
          "base",
          "scripts",
          "jdk",
          "manual",
          "internal",
          "opens",
          "add",
          "vmoptions"
        ],
        "index_node_id": "39dd8443-d960-45a8-bb46-7275ad7fbc8e",
        "index_node_hash": "0189157697b3c6a418ccf8264a09699f25858975578f3467c76d6bfc94df1d73",
        "hit_count": 0,
        "enabled": true,
        "disabled_at": null,
        "disabled_by": null,
        "status": "completed",
        "created_by": "dbcb1ab5-90c8-41a7-8b78-73b235eb6f6f",
        "created_at": 1728734540,
        "indexing_at": 1728734552,
        "completed_at": 1728734584,
        "error": null,
        "stopped_at": null,
        "document": {
          "id": "a8c6c36f-9f5d-4d7a-8472-f5d7b75d71d2",
          "data_source_type": "upload_file",
          "name": "readme.txt",
          "doc_type": null
        }
      },
      "score": 3.730463140527718e-05,
      "tsne_position": null
    }
  ]
}
```
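A Python sketch of building a retrieval request body like the cURL example above; the helper name and defaults are illustrative, and for `semantic_search` or `hybrid_search` a rerank configuration would also need to be filled in:

```python
import json

# Build a retrieval request body matching the keyword_search example.
def build_retrieval_body(query: str, search_method: str = "keyword_search",
                         top_k: int = 2) -> str:
    return json.dumps({
        "query": query,
        "retrieval_model": {
            "search_method": search_method,
            "reranking_enable": False,  # enable for semantic/hybrid search
            "top_k": top_k,
            "score_threshold_enabled": False,
            "score_threshold": None,
        },
    })

body = build_retrieval_body("test")
```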
### Error message

- code Error code
- status Error status
- message Error message

```json {{ title: 'Response' }}
{
  "code": "no_file_uploaded",
  "message": "Please upload your file.",
  "status": 400
}
```
| code | status | message |
| :--- | :--- | :--- |
| no_file_uploaded | 400 | Please upload your file. |
| too_many_files | 400 | Only one file is allowed. |
| file_too_large | 413 | File size exceeded. |
| unsupported_file_type | 415 | File type not allowed. |
| high_quality_dataset_only | 400 | Current operation only supports 'high-quality' datasets. |
| dataset_not_initialized | 400 | The dataset is still being initialized or indexing. Please wait a moment. |
| archived_document_immutable | 403 | The archived document is not editable. |
| dataset_name_duplicate | 409 | The dataset name already exists. Please modify your dataset name. |
| invalid_action | 400 | Invalid action. |
| document_already_finished | 400 | The document has been processed. Please refresh the page or go to the document details. |
| document_indexing | 400 | The document is being processed and cannot be edited. |
| invalid_metadata | 400 | The metadata content is incorrect. Please check and verify. |
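A client can key its error handling on the `code` field of the error body. A Python sketch using the codes listed above; which codes are worth retrying is an assumption for illustration, not something the API prescribes:

```python
# Coarse client-side handling keyed on the "code" field of an error body.
# Treating these two codes as transient/retryable is an assumption.
RETRYABLE = {"dataset_not_initialized", "document_indexing"}

def classify_error(error: dict) -> str:
    """Return a rough handling strategy for a Knowledge API error body."""
    code = error.get("code", "")
    if code in RETRYABLE:
        return "retry-later"
    if error.get("status") in (400, 403, 409, 413, 415):
        return "fix-request"
    return "unknown"

result = classify_error({
    "code": "document_indexing",
    "message": "The document is being processed and cannot be edited.",
    "status": 400,
})
```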