Emilio Gagliardi
07/31/2023, 8:32 PM

Yetunde
07/31/2023, 10:01 PM

Emilio Gagliardi
07/31/2023, 11:04 PM

Debanjan Banerjee
08/07/2023, 10:41 AM

Emilio Gagliardi
08/07/2023, 4:21 PM

Debanjan Banerjee
08/08/2023, 4:59 PM

credentials.yml
• OpenAI API key
parameters.yml
• set limits on tokens (max tokens per response, etc.)
• set limits on timeouts, etc.
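As a rough sketch, the two config files above might look like this in a Kedro project (the key names and values are illustrative assumptions, not the authors' actual files):

```yaml
# conf/local/credentials.yml (illustrative key names)
openai:
  api_key: sk-placeholder

# conf/base/parameters.yml
openai_params:
  model: gpt-3.5-turbo
  max_tokens: 512      # cap on tokens per response
  request_timeout: 30  # seconds before a call is abandoned
  batch_size: 20       # rows sent per API invocation
```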
node.py
• create a node where the OpenAI API is called in batches
• create a prompt
• set up guardrails (this can move to utils to make it more modular)
• load credentials using ConfigLoader
• invoke the API as below
• run the batches
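The batching described above might look roughly like the following. This is an illustrative sketch, not the authors' actual node.py: the function names, the injected `call_api` wrapper, and the parameter keys are all assumptions.

```python
# Illustrative sketch of a batched-OpenAI Kedro node; names are hypothetical.
from typing import Callable, Iterable, List


def chunked(items: List[str], batch_size: int) -> Iterable[List[str]]:
    """Split the input rows into fixed-size batches."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def call_openai_node(
    rows: List[str],
    params: dict,
    call_api: Callable[[str], str],
) -> List[str]:
    """Kedro-style node body: build a prompt per batch and invoke the API.

    `call_api` stands in for the real call; with the 2023-era (pre-1.0)
    openai SDK it would wrap openai.ChatCompletion.create(...), passing
    max_tokens and request_timeout from parameters.yml and the key
    loaded from credentials.yml via ConfigLoader.
    """
    results = []
    for batch in chunked(rows, params["batch_size"]):
        prompt = "Summarise each line:\n" + "\n".join(batch)
        results.append(call_api(prompt))
    return results
```

Injecting `call_api` keeps the node testable without network access and gives one place to apply guardrails and token/timeout limits.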
so long story short, we are using the node abstraction to invoke the API in batches. Surely there are better ways to do this, but invoking it as below has worked well for us.

Emilio Gagliardi
08/08/2023, 5:29 PM

Debanjan Banerjee
08/08/2023, 9:07 PM

Datasette, but it turned out to be killing a mosquito with a sword 🙂. We have coded our prompts after robust testing in the ChatGPT interface.
• You can either set your API key as an env variable and let openai read it implicitly; I personally don't trust that process too much, as it feels like less control in my hands, so I use Kedro's credentials.yml. It's a dev's discretion; there are no concerns from the OpenAI API side.
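For concreteness, a sketch of the two key-management options just mentioned (placeholder values; the `openai.api_key` assignment in the comment is how the pre-1.0 SDK accepts an explicit key):

```python
import os

# Option 1 - implicit: the 2023-era openai SDK reads OPENAI_API_KEY
# from the environment if no key is set explicitly.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

# Option 2 - explicit: load the key from Kedro's credentials.yml
# (via ConfigLoader) and assign it yourself, e.g.
#     openai.api_key = creds["openai"]["api_key"]
creds = {"openai": {"api_key": "sk-placeholder"}}  # illustrative shape
api_key = creds["openai"]["api_key"]
```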
• We are exploring LangChain. In the product I'm using this in right now we have a single OpenAI use case, but we are very swiftly moving to about 7 GenAI use cases 😛, so managing them via LangChain will save us a lot of effort, I think. So yes, it's on the radar.