J. Camilo V. Tieck - 07/11/2023, 10:49 PM
Deepyaman Datta - 07/11/2023, 10:57 PM
> we have tried a lambda function that has the code for [3.0] and has access to the S3 location. this works, but the code for the post-processing [3.0] is duplicated.
Why is the code duplicated? I don't think I understood that from your writeup. Is the problem that you don't package the code for [3.0] in a way that it can be run from both the service doing scheduled execution and the one doing on-demand execution?
J. Camilo V. Tieck - 07/11/2023, 11:34 PM
Deepyaman Datta - 07/11/2023, 11:41 PM
> I tried micro-packaging the pipeline and importing it in the lambda, but the dependency was too large and jenkins had some issues with it.
This is in line with the approach I'd take. You could push the generated wheel to a shared space, like S3 or even an internal PyPI repository. That wheel would then be used by both deployments. You could alternatively have a CI process that builds the Docker image for EC2 plus a Docker image for Lambda (I don't have enough recent hands-on experience with AWS to say whether you could use the same image), and use those in your deployments. There are also "hackier" ways of doing things, like dynamically referencing the pipeline code from a shared location, but I'm not sure why you would do that instead of the above two options.
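The wheel-sharing approach above could be sketched roughly as follows. This is a hedged sketch, not from the thread: the bucket name, micro-package name, and artifact filename are all placeholders, and the exact artifact name/format depends on your Kedro version.

```shell
# Hypothetical names throughout (my-artifacts-bucket, post_processing).

# 1. Package the pipeline as a standalone distribution (Kedro's micropkg tooling).
kedro micropkg package pipelines.post_processing

# 2. Push the built artifact from dist/ to a shared S3 location.
aws s3 cp dist/ s3://my-artifacts-bucket/packages/ --recursive

# 3. In BOTH build steps (the EC2 image and the Lambda image/layer),
#    pull and install the same artifact, so neither copy of the code drifts.
aws s3 cp s3://my-artifacts-bucket/packages/ . --recursive
pip install post_processing-*
```

The point of the shared location is that the post-processing code is built once and installed identically in both the scheduled and on-demand deployments, rather than being duplicated in each.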
J. Camilo V. Tieck - 07/11/2023, 11:48 PM
Deepyaman Datta - 07/11/2023, 11:52 PM
J. Camilo V. Tieck - 07/11/2023, 11:56 PM
Deepyaman Datta - 07/12/2023, 12:42 AM
Juan Luis - 07/12/2023, 6:16 AM
J. Camilo V. Tieck - 07/12/2023, 3:36 PM
Juan Luis - 07/12/2023, 3:48 PM
J. Camilo V. Tieck - 07/12/2023, 9:31 PM