Package Methods (0.8.2)

Summary of the method entries for langchain-google-spanner.

langchain_google_spanner.loader._load_doc_to_row

_load_doc_to_row( table_fields: typing.List[str], doc: langchain_core.documents.base.Document, content_column: str, metadata_json_column: str, parse_json: bool = True, ) -> tuple

Convert a LangChain document into a Spanner table row (returned as a tuple).

See more: langchain_google_spanner.loader._load_doc_to_row

langchain_google_spanner.chat_message_history.SpannerChatMessageHistory._verify_schema

_verify_schema() -> None

Verify that the table exists with the schema required by the SpannerChatMessageHistory class.

See more: langchain_google_spanner.chat_message_history.SpannerChatMessageHistory._verify_schema

langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.add_message

add_message(message: langchain_core.messages.base.BaseMessage) -> None

Append the message to the record in Cloud Spanner.

See more: langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.add_message
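
The following sketch appends a message; the constructor arguments (instance_id, database_id, table_name, session_id) and the placeholder values are assumptions, since only add_message is documented in this entry:

    from langchain_core.messages import HumanMessage
    from langchain_google_spanner.chat_message_history import SpannerChatMessageHistory

    # Constructor arguments below are assumed; the IDs are placeholders.
    history = SpannerChatMessageHistory(
        instance_id="my-instance",
        database_id="my-database",
        table_name="chat_history",
        session_id="session-1",
    )

    # Append one message to the Cloud Spanner-backed history.
    history.add_message(HumanMessage(content="Hello, Spanner!"))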

langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.clear

clear() -> None

Clear session memory from Cloud Spanner.

See more: langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.clear

langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.create_chat_history_table

create_chat_history_table( instance_id: str, database_id: str, table_name: str, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, ) -> None

Create a chat history table in a Cloud Spanner database.

See more: langchain_google_spanner.chat_message_history.SpannerChatMessageHistory.create_chat_history_table
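
A sketch of the one-time table setup, assuming the method can be called directly on the class; the names below are placeholders:

    from langchain_google_spanner.chat_message_history import SpannerChatMessageHistory

    # Create the backing table once before using the chat history.
    SpannerChatMessageHistory.create_chat_history_table(
        instance_id="my-instance",
        database_id="my-database",
        table_name="chat_history",
    )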

langchain_google_spanner.loader.SpannerDocumentSaver

SpannerDocumentSaver( instance_id: str, database_id: str, table_name: str, content_column: str = "page_content", metadata_columns: typing.List[str] = [], metadata_json_column: str = "langchain_metadata", primary_key: typing.Optional[str] = None, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, )

Initialize Spanner document saver.

See more: langchain_google_spanner.loader.SpannerDocumentSaver

langchain_google_spanner.loader.SpannerDocumentSaver.add_documents

add_documents(documents: typing.List[langchain_core.documents.base.Document])

Add documents to the Spanner table.

See more: langchain_google_spanner.loader.SpannerDocumentSaver.add_documents
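
A sketch that constructs a saver with the documented defaults and writes one document; the IDs are placeholders, and the target table is assumed to already exist (see init_document_table below):

    from langchain_core.documents import Document
    from langchain_google_spanner.loader import SpannerDocumentSaver

    # Saver targeting an existing document table (placeholder names).
    saver = SpannerDocumentSaver(
        instance_id="my-instance",
        database_id="my-database",
        table_name="documents",
    )

    # Write a single document; with no metadata_columns configured, metadata
    # is assumed to land in the JSON metadata column.
    saver.add_documents([
        Document(page_content="Spanner is a distributed SQL database.",
                 metadata={"source": "notes"}),
    ])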

langchain_google_spanner.loader.SpannerDocumentSaver.create_table

create_table( client: google.cloud.spanner_v1.client.Client, instance_id: str, database_id: str, table_name: str, primary_key: str, metadata_json_column: str, content_column: str, metadata_columns: typing.List[langchain_google_spanner.loader.Column], )

Create a new table in the Spanner database.

See more: langchain_google_spanner.loader.SpannerDocumentSaver.create_table

langchain_google_spanner.loader.SpannerDocumentSaver.delete

delete(documents: typing.List[langchain_core.documents.base.Document])

Delete documents from the table.

See more: langchain_google_spanner.loader.SpannerDocumentSaver.delete

langchain_google_spanner.loader.SpannerDocumentSaver.init_document_table

init_document_table( instance_id: str, database_id: str, table_name: str, content_column: str = "page_content", metadata_columns: typing.List[langchain_google_spanner.loader.Column] = [], primary_key: str = "", store_metadata: bool = True, metadata_json_column: str = "langchain_metadata", )

Create a new table to store docs with a custom schema.

See more: langchain_google_spanner.loader.SpannerDocumentSaver.init_document_table
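
A sketch of custom-schema table creation; the Column arguments (name, Spanner type, nullability) are an assumption, since the Column class is not documented in this section, and all names are placeholders:

    from langchain_google_spanner.loader import Column, SpannerDocumentSaver

    # One-time setup: a document table with one extra metadata column.
    SpannerDocumentSaver.init_document_table(
        instance_id="my-instance",
        database_id="my-database",
        table_name="documents",
        # Assumed Column signature: Column(name, spanner_type, nullable).
        metadata_columns=[Column("source", "STRING(1024)", True)],
    )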

langchain_google_spanner.loader.SpannerLoader

SpannerLoader( instance_id: str, database_id: str, query: str, content_columns: typing.List[str] = [], metadata_columns: typing.List[str] = [], format: str = "text", databoost: bool = False, metadata_json_column: str = "langchain_metadata", staleness: typing.Union[float, datetime.datetime] = 0.0, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, )

Initialize Spanner document loader.

See more: langchain_google_spanner.loader.SpannerLoader

langchain_google_spanner.loader.SpannerLoader.lazy_load

lazy_load() -> typing.Iterator[langchain_core.documents.base.Document]

A lazy loader for LangChain documents from a Spanner database.

See more: langchain_google_spanner.loader.SpannerLoader.lazy_load

langchain_google_spanner.loader.SpannerLoader.load

load() -> typing.List[langchain_core.documents.base.Document]

Load LangChain documents from a Spanner database.

See more: langchain_google_spanner.loader.SpannerLoader.load
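
A sketch of both loading styles; the instance, database, and query are placeholders:

    from langchain_google_spanner.loader import SpannerLoader

    # Turns each query result row into a LangChain Document.
    loader = SpannerLoader(
        instance_id="my-instance",
        database_id="my-database",
        query="SELECT * FROM documents",
    )

    # Stream documents one at a time ...
    for doc in loader.lazy_load():
        print(doc.page_content)

    # ... or materialize them all at once.
    docs = loader.load()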

langchain_google_spanner.vector_store.DialectSemantics.getDistanceFunction

getDistanceFunction(distance_strategy=DistanceStrategy.EUCLIDEAN) -> str

Abstract method to get the distance function based on the provided distance strategy.

See more: langchain_google_spanner.vector_store.DialectSemantics.getDistanceFunction

langchain_google_spanner.vector_store.GoogleSqlSemantics.getDistanceFunction

getDistanceFunction(distance_strategy=DistanceStrategy.EUCLIDEAN) -> str

Get the GoogleSQL distance function based on the provided distance strategy.

See more: langchain_google_spanner.vector_store.GoogleSqlSemantics.getDistanceFunction

langchain_google_spanner.vector_store.PGSqlSemantics.getDistanceFunction

getDistanceFunction(distance_strategy=DistanceStrategy.EUCLIDEAN) -> str

Get the PostgreSQL-dialect distance function based on the provided distance strategy.

See more: langchain_google_spanner.vector_store.PGSqlSemantics.getDistanceFunction
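
A sketch of mapping a distance strategy to a dialect-specific SQL function name, assuming the semantics classes take no constructor arguments and that DistanceStrategy is importable from the same module:

    from langchain_google_spanner.vector_store import (
        DistanceStrategy,
        GoogleSqlSemantics,
        PGSqlSemantics,
    )

    # EUCLIDEAN is the default strategy shown in the signatures above.
    print(GoogleSqlSemantics().getDistanceFunction(DistanceStrategy.EUCLIDEAN))
    print(PGSqlSemantics().getDistanceFunction(DistanceStrategy.EUCLIDEAN))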

langchain_google_spanner.vector_store.QueryParameters

QueryParameters( algorithm=NearestNeighborsAlgorithm.EXACT_NEAREST_NEIGHBOR, distance_strategy=DistanceStrategy.EUCLIDEAN, read_timestamp: typing.Optional[datetime.datetime] = None, min_read_timestamp: typing.Optional[datetime.datetime] = None, max_staleness: typing.Optional[datetime.timedelta] = None, exact_staleness: typing.Optional[datetime.timedelta] = None, )

Initialize query parameters.

See more: langchain_google_spanner.vector_store.QueryParameters
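
A sketch that builds query parameters with bounded staleness; the values are illustrative:

    import datetime

    from langchain_google_spanner.vector_store import (
        DistanceStrategy,
        NearestNeighborsAlgorithm,
        QueryParameters,
    )

    # Exact nearest-neighbor search, Euclidean distance, reads at most 15 s stale.
    params = QueryParameters(
        algorithm=NearestNeighborsAlgorithm.EXACT_NEAREST_NEIGHBOR,
        distance_strategy=DistanceStrategy.EUCLIDEAN,
        max_staleness=datetime.timedelta(seconds=15),
    )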

langchain_google_spanner.vector_store.SpannerVectorStore._generate_sql

_generate_sql( dialect, table_name, id_column, content_column, embedding_column, column_configs, primary_key, secondary_indexes: typing.Optional[ typing.List[ langchain_google_spanner.vector_store.SecondaryIndex | langchain_google_spanner.vector_store.VectorSearchIndex ] ] = None, vector_size: typing.Optional[int] = None, ) -> typing.List[str]

Generate SQL for creating the vector store table.

See more: langchain_google_spanner.vector_store.SpannerVectorStore._generate_sql

langchain_google_spanner.vector_store.SpannerVectorStore._select_relevance_score_fn

_select_relevance_score_fn() -> typing.Callable[[float], float]

langchain_google_spanner.vector_store.SpannerVectorStore.add_documents

add_documents( documents: typing.List[langchain_core.documents.base.Document], ids: typing.Optional[typing.List[str]] = None, **kwargs: typing.Any ) -> typing.List[str]

Add documents to the vector store index.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.add_documents

langchain_google_spanner.vector_store.SpannerVectorStore.add_texts

add_texts( texts: typing.Iterable[str], metadatas: typing.Optional[typing.List[dict]] = None, ids: typing.Optional[typing.List[str]] = None, batch_size: int = 5000, **kwargs: typing.Any ) -> typing.List[str]

Add texts to the vector store index.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.add_texts
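
A short sketch; store is a hypothetical, already-constructed SpannerVectorStore (see the from_documents sketch below for one way to build it):

    # 'store' is assumed to exist; see the from_documents example below.
    ids = store.add_texts(
        texts=["Spanner supports vector search.",
               "LangChain integrates with Spanner."],
        metadatas=[{"source": "notes"}, {"source": "notes"}],
    )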

langchain_google_spanner.vector_store.SpannerVectorStore.delete

delete( ids: typing.Optional[typing.List[str]] = None, documents: typing.Optional[ typing.List[langchain_core.documents.base.Document] ] = None, **kwargs: typing.Any ) -> typing.Optional[bool]

Delete records from the vector store.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.delete

langchain_google_spanner.vector_store.SpannerVectorStore.from_documents

from_documents(documents: typing.List[langchain_core.documents.base.Document], embedding: langchain_core.embeddings.embeddings.Embeddings, instance_id: str, database_id: str, table_name: str, id_column: str = 'langchain_id', content_column: str = 'content', embedding_column: str = 'embedding', ids: typing.Optional[typing.List[str]] = None, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, metadata_columns: typing.Optional[typing.List[str]] = None, ignore_metadata_columns: typing.Optional[typing.List[str]] = None, metadata_json_column: typing.Optional[str] = None, query_parameter: langchain_google_spanner.vector_store.QueryParameters = QueryParameters(), **kwargs: typing.Any )

Initialize SpannerVectorStore from a list of documents.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.from_documents
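
A sketch of building a store from documents; VertexAIEmbeddings is an assumed embedding provider (any langchain_core Embeddings implementation should work), the model name and IDs are placeholders, and the vector table is assumed to already exist (see init_vector_store_table below):

    from langchain_core.documents import Document
    from langchain_google_spanner.vector_store import SpannerVectorStore
    from langchain_google_vertexai import VertexAIEmbeddings  # assumed provider

    docs = [
        Document(page_content="Spanner is a distributed SQL database.",
                 metadata={"source": "notes"}),
    ]

    # Embeds the documents and writes them to the existing vector table.
    store = SpannerVectorStore.from_documents(
        documents=docs,
        embedding=VertexAIEmbeddings(model_name="text-embedding-004"),
        instance_id="my-instance",
        database_id="my-database",
        table_name="vector_table",
    )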

langchain_google_spanner.vector_store.SpannerVectorStore.from_texts

from_texts(texts: typing.List[str], embedding: langchain_core.embeddings.embeddings.Embeddings, instance_id: str, database_id: str, table_name: str, metadatas: typing.Optional[typing.List[dict]] = None, id_column: str = 'langchain_id', content_column: str = 'content', embedding_column: str = 'embedding', ids: typing.Optional[typing.List[str]] = None, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, metadata_columns: typing.Optional[typing.List[str]] = None, ignore_metadata_columns: typing.Optional[typing.List[str]] = None, metadata_json_column: typing.Optional[str] = None, query_parameter: langchain_google_spanner.vector_store.QueryParameters = QueryParameters(), **kwargs: typing.Any )

Initialize SpannerVectorStore from a list of texts.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.from_texts

langchain_google_spanner.vector_store.SpannerVectorStore.init_vector_store_table

init_vector_store_table( instance_id: str, database_id: str, table_name: str, client: typing.Optional[google.cloud.spanner_v1.client.Client] = None, id_column: typing.Union[ str, langchain_google_spanner.vector_store.TableColumn ] = "langchain_id", content_column: str = "content", embedding_column: str = "embedding", metadata_columns: typing.Optional[ typing.List[langchain_google_spanner.vector_store.TableColumn] ] = None, primary_key: typing.Optional[str] = None, vector_size: typing.Optional[int] = None, secondary_indexes: typing.Optional[ typing.List[ langchain_google_spanner.vector_store.SecondaryIndex | langchain_google_spanner.vector_store.VectorSearchIndex ] ] = None, ) -> bool

Initialize a new vector store table in Google Cloud Spanner.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.init_vector_store_table
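
A sketch of the one-time schema setup with the documented defaults; the names are placeholders and 768 is an assumed embedding dimension:

    from langchain_google_spanner.vector_store import SpannerVectorStore

    # Creates the vector table with the default id/content/embedding columns.
    SpannerVectorStore.init_vector_store_table(
        instance_id="my-instance",
        database_id="my-database",
        table_name="vector_table",
        vector_size=768,  # match your embedding model's output dimension
    )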

langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search

max_marginal_relevance_search( query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[langchain_core.documents.base.Document]

Return docs selected using the maximal marginal relevance.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search
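
A short sketch; store is a hypothetical, already-constructed SpannerVectorStore (see the from_documents sketch above):

    # MMR trades off relevance (lambda_mult near 1.0) against diversity (near 0.0).
    docs = store.max_marginal_relevance_search(
        "distributed databases",
        k=4,          # results returned
        fetch_k=20,   # candidates fetched before re-ranking
        lambda_mult=0.5,
    )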

langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search_by_vector

max_marginal_relevance_search_by_vector( embedding: typing.List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[langchain_core.documents.base.Document]

Return docs selected using the maximal marginal relevance.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search_by_vector

langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search_with_score_by_vector

max_marginal_relevance_search_with_score_by_vector( embedding: typing.List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, pre_filter: typing.Optional[str] = None, **kwargs ) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]

Return docs and their similarity scores selected using the maximal marginal relevance.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.max_marginal_relevance_search_with_score_by_vector

langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search

similarity_search( query: str, k: int = 4, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[langchain_core.documents.base.Document]

Perform similarity search for a given query.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search
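
A short sketch; store is a hypothetical, already-constructed SpannerVectorStore (see the from_documents sketch above), and treating pre_filter as a SQL-style predicate over metadata columns is an assumption:

    results = store.similarity_search(
        "distributed databases",
        k=4,
        pre_filter="source = 'notes'",  # assumed SQL-style metadata filter
    )
    for doc in results:
        print(doc.page_content)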

langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_by_vector

similarity_search_by_vector( embedding: typing.List[float], k: int = 4, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[langchain_core.documents.base.Document]

Perform similarity search for a given embedding vector.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_by_vector

langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_with_score

similarity_search_with_score( query: str, k: int = 4, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]

Perform similarity search for a given query with scores.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_with_score

langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_with_score_by_vector

similarity_search_with_score_by_vector( embedding: typing.List[float], k: int = 4, pre_filter: typing.Optional[str] = None, **kwargs: typing.Any ) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]

Perform similarity search for a given embedding vector and return documents with scores.

See more: langchain_google_spanner.vector_store.SpannerVectorStore.similarity_search_with_score_by_vector