There is a user-defined function feature for the command line code executor; see the `functions` parameter. It lets you make predefined modules and functions available inside the code executor, so maybe you can write your "sub process" workflow as a function. For tracing, see https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/telemetry.html
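In AutoGen the predefined callables are wrapped and passed to the executor via its `functions` parameter (the exact wrapper types vary by version, so check the docs linked above). The underlying idea can be sketched with plain Python, using a hypothetical `ask_llm` helper injected into the namespace the generated code runs in:

```python
# Stdlib-only sketch of the idea behind a code executor's "functions" parameter:
# predefined helpers are made available to the code the agent writes. This is
# an illustration of the concept, not the actual AutoGen API.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to another LLM; a real version would hit an API."""
    return f"LLM reply to: {prompt}"

def run_generated_code(code: str) -> dict:
    # Inject the predefined function into the snippet's namespace so the
    # generated code can call it and use the result as an ordinary variable.
    namespace = {"ask_llm": ask_llm}
    exec(code, namespace)
    return namespace

ns = run_generated_code("summary = ask_llm('Summarize the data.')")
print(ns["summary"])  # the generated code's variable holds the LLM's answer
```

Because the real executor runs the helper inside your main process, calls made this way can also be picked up by your logging/tracing setup, unlike a fully detached subprocess.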
-
Hi,
I have the following setup:
I want the code writer agent to be able to write code that can ask other LLMs. Imagine something like this:
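(The original snippet was not preserved; the following is a hypothetical reconstruction of the pattern described below, with a fake client standing in for the real OpenAI SDK so it runs on its own:)

```python
# Hypothetical reconstruction: code produced by the code writer agent calls an
# LLM and keeps the answer in a local variable. FakeOpenAI is a stand-in for
# the real OpenAI client, which would make an actual API call.

class FakeOpenAI:
    def ask(self, prompt: str) -> str:
        # A real client would send `prompt` to a model and return its reply.
        return "category: hardware"

client = FakeOpenAI()
category = client.ask("Classify this ticket: 'printer is broken'")
# The LLM's answer is immediately usable in the same snippet:
print(category.split(": ")[1])
```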
This is a little simplistic, but the basic idea is that an LLM can be called from within code, and the result is available as a variable that can be used directly in the same code snippet.
The issue is that these calls to OpenAI (in the future this could also be a full AutoGen chat) are decoupled from my main process, so nothing is logged, for instance.
It would be really cool if I could integrate this into my main setup somehow, e.g. so that it becomes part of the inner messages of the code executor agent. Open to ideas here.
Currently my approach would probably be to save the output of this agent to a file and then parse that file again in the code executor agent. But I am open to other ideas!
I also heard AutoGen supports some tracing solutions, so maybe that could help here too? I am not too familiar with them.
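The file-based workaround could be sketched like this (paths and keys here are made up for illustration):

```python
import json
import os
import tempfile

# Step 1 -- run by the decoupled sub process: dump the LLM's output to disk.
path = os.path.join(tempfile.gettempdir(), "sub_agent_output.json")
with open(path, "w") as f:
    json.dump({"answer": "Paris is the capital of France."}, f)

# Step 2 -- run later inside the code executor agent: parse the file again.
with open(path) as f:
    result = json.load(f)
print(result["answer"])
```

Using a structured format like JSON avoids fragile string parsing, but this still leaves the sub process's messages outside the main conversation and its logs.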