Channel: Recent Questions - Stack Overflow

How to use an open-source LLM with self-query in LangChain


I am trying to replicate the self-query retriever example below:

from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "thriller",
            "rating": 9.9,
        },
    ),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
)

# This example only specifies a filter
retriever.invoke("I want to watch a movie rated higher than 8.5")
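For context on what the retriever is doing: the self-query step asks the LLM to translate the natural-language question into a structured query, i.e. a semantic search string plus a metadata filter built from comparators such as `gt` or `eq`, which the vector store then applies. A minimal pure-Python sketch of the filtering half (names here are illustrative, not LangChain internals):

```python
# Illustrative sketch: the LLM's job in self-querying is to turn
# "rated higher than 8.5" into a structured comparison like
# ("rating", "gt", 8.5); the vector store then applies that filter.
import operator

COMPARATORS = {
    "eq": operator.eq,
    "ne": operator.ne,
    "gt": operator.gt,
    "gte": operator.ge,
    "lt": operator.lt,
    "lte": operator.le,
}

def apply_filter(docs_metadata, attribute, comparator, value):
    """Keep only metadata dicts whose `attribute` satisfies the comparison."""
    op = COMPARATORS[comparator]
    return [m for m in docs_metadata if attribute in m and op(m[attribute], value)]

movies = [
    {"year": 1993, "rating": 7.7, "genre": "science fiction"},
    {"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    {"year": 1979, "director": "Andrei Tarkovsky", "rating": 9.9},
]

# The structured query the LLM should produce for
# "I want to watch a movie rated higher than 8.5":
print(apply_filter(movies, "rating", "gt", 8.5))
```

The real retriever also sends the remaining semantic part of the question to the vector store; this sketch only shows the metadata side.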

The above works. I then tried using an open-source LLM via LlamaCpp instead, as below:

llm = LlamaCpp(
    model_path="D:\\opensrc_llms_downloads/llama-2-7b-chat.Q8_0.gguf",
    temperature=0.2,
    top_p=0.9,
    callback_manager=callback_manager,
    verbose=True,
    return_full_text=False,
    # repeat_last_n=56,
    max_tokens=256,  # change this to control the number of tokens generated in output
    n_ctx=4096,  # llama2 has a context window of 4096 tokens, but we set it lower to allow for some wiggle room
    repetition_penalty=1.18,
    # model_kwargs={"stopping_criteria": ['Answer:', "\n", "\n\n"]},
    # stop=["\n", "\n\n"]  # , "Answer:"]
)
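A smaller chat-tuned model like llama-2-7b often drifts from the exact vocabulary the query constructor expects. One mitigation people try is spelling out the allowed comparator tokens explicitly in the prompt text. A hedged sketch of such a hint (a hypothetical helper, not a LangChain API; the token list mirrors the one in the error message further down):

```python
# Hypothetical helper: enumerate the exact comparator vocabulary in the
# prompt so a smaller local model is less tempted to invent tokens such
# as "contains" instead of "contain".
ALLOWED_OPERATORS = ["and", "or", "not"]
ALLOWED_COMPARATORS = ["eq", "ne", "gt", "gte", "lt", "lte",
                       "contain", "like", "in", "nin"]

def comparator_hint():
    """Build a reminder line that could be appended to the query-constructor prompt."""
    return (
        "Use ONLY these comparators: "
        + ", ".join(ALLOWED_COMPARATORS)
        + ". Note it is 'contain', not 'contains'."
    )

print(comparator_hint())
```

Whether this is enough depends on the model; it only nudges the generation, it does not constrain it.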

But I end up getting the following error:

OutputParserException: Parsing text raised following error:
Received unrecognized function contains. Valid functions are [<Operator.AND: 'and'>, <Operator.OR: 'or'>, <Operator.NOT: 'not'>, <Comparator.EQ: 'eq'>, <Comparator.NE: 'ne'>, <Comparator.GT: 'gt'>, <Comparator.GTE: 'gte'>, <Comparator.LT: 'lt'>, <Comparator.LTE: 'lte'>, <Comparator.CONTAIN: 'contain'>, <Comparator.LIKE: 'like'>, <Comparator.IN: 'in'>, <Comparator.NIN: 'nin'>]
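Reading the error: the local model emitted the function name `contains`, while the parser's vocabulary only has `contain`. One workaround to experiment with is a normalization pass over the model's structured-query string before it reaches the parser. This is a hypothetical helper of my own, not part of LangChain, and the alias table is just a guess at common near-misses:

```python
import re

# Canonical tokens accepted by the structured-query parser (from the error).
VALID = {"and", "or", "not", "eq", "ne", "gt", "gte",
         "lt", "lte", "contain", "like", "in", "nin"}

# Common near-misses a chat-tuned model might emit -> canonical token.
# This alias table is an assumption, not an exhaustive list.
ALIASES = {"contains": "contain", "equals": "eq",
           "greater_than": "gt", "less_than": "lt", "not_equal": "ne"}

def normalize_functions(raw_query: str) -> str:
    """Rewrite unrecognized function names in a structured-query string
    like 'contains("genre", "thriller")' to their valid spellings."""
    def fix(match):
        name = match.group(1)
        if name in VALID:
            return match.group(0)  # already a valid token, leave untouched
        fixed = ALIASES.get(name, name)
        return match.group(0).replace(name, fixed, 1)
    # Match every identifier immediately followed by an opening parenthesis.
    return re.sub(r'(\w+)\(', fix, raw_query)

print(normalize_functions('contains("genre", "thriller")'))
# -> contain("genre", "thriller")
```

Hooking this in would mean customizing the retriever's output parser; the sketch only shows the string rewrite itself.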

