Semantic kernel model adapter #4851
base: main
Conversation
I can see this is going to be useful for integrating with non-OpenAI APIs such as Gemini and Mistral, using SK's large ecosystem of connectors. Could you show a few examples of this in the API documentation? That way users come away with a clear picture of why this model adapter is useful.
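The value the reviewer describes is the adapter pattern: one AutoGen-facing client interface delegating to any Semantic Kernel connector. Below is a minimal pure-Python sketch of that shape; all class and method names are illustrative stand-ins, not the actual APIs of either library.

```python
# Hypothetical sketch of the adapter pattern this PR implements: an
# AutoGen-style chat client that delegates to any Semantic Kernel
# connector. Class names here are stand-ins, not the real APIs.
from dataclasses import dataclass


@dataclass
class SKChatResponse:
    """Stand-in for a Semantic Kernel chat completion result."""
    content: str


class FakeSKConnector:
    """Stand-in for any SK connector (e.g. Mistral, Gemini, Ollama)."""

    def __init__(self, name: str) -> None:
        self.name = name

    def get_chat_message_content(self, prompt: str) -> SKChatResponse:
        return SKChatResponse(content=f"[{self.name}] echo: {prompt}")


class SKChatCompletionAdapter:
    """Adapts an SK connector to a single create() interface, so the
    rest of the framework never needs provider-specific code."""

    def __init__(self, sk_client: FakeSKConnector) -> None:
        self._sk_client = sk_client

    def create(self, prompt: str) -> str:
        return self._sk_client.get_chat_message_content(prompt).content


adapter = SKChatCompletionAdapter(FakeSKConnector("mistral"))
print(adapter.create("hello"))  # [mistral] echo: hello
```

Swapping providers then means swapping only the connector passed to the adapter, which is exactly why examples with Gemini or Mistral would make the docs compelling.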
Force-pushed from f77cd39 to 3fe641e (Compare)
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #4851   +/- ##
=======================================
  Coverage   72.94%   72.94%
=======================================
  Files         115      115
  Lines        6780     6780
=======================================
  Hits         4946     4946
  Misses       1834     1834

Flags with carried forward coverage won't be shown.
def sk_client() -> AzureChatCompletion:
    deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
    endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    api_key = os.getenv("AZURE_OPENAI_API_KEY")
If the keys are in the environment, the tests run against the real service; otherwise they use a mock. I set it up like this so I could easily swap between real testing and testing with a mock. Let me know if you think this could cause any issues (e.g., unexpected API calls in CI).
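The env-gated swap described above can be sketched as a small factory: build the real client only when all three credentials are present, otherwise return a mock so CI never makes network calls. The function name is illustrative; the `AzureChatCompletion` import path is Semantic Kernel's real one, but it is only reached when credentials exist.

```python
# Env-gated client factory: real Azure client if credentials are set,
# otherwise a mock with the same duck-typed surface. Sketch only.
import os
from unittest.mock import MagicMock


def make_sk_client():
    deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
    endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    api_key = os.getenv("AZURE_OPENAI_API_KEY")
    if deployment and endpoint and api_key:
        # Real client path; only imported when we will actually use it.
        from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

        return AzureChatCompletion(
            deployment_name=deployment, endpoint=endpoint, api_key=api_key
        )
    # Mock path: no network traffic, safe for CI.
    return MagicMock(name="AzureChatCompletion")


client = make_sk_client()
```

One mitigation for the reviewer's CI concern would be an explicit opt-in flag (e.g. a dedicated env var) rather than inferring intent from the presence of credentials.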
semantic-kernel-all = [
    "semantic-kernel[google,hugging_face,mistralai,ollama,onnx,anthropic,usearch,pandas,aws,dapr]>=1.17.1",
]
I added all the model extras. Some of the search-plugin extras depend on pre-release Azure library versions, so I left those out. When we decide to fully enable Semantic Kernel, we will need to decide whether to update the CI or limit functionality until those dependencies have stable releases.
OutputT = TypeVar("OutputT", bound=BaseModel)


class KernelFunctionFromTool(KernelFunction):
I don't have separate tests for the tool adapter yet, but it has good coverage in the model adapter test file. I think it will be useful to add a few dedicated tests when we implement the agent, so that I can write tests for different Semantic Kernel plugins; I hope that's OK.
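To make the tool adapter's job concrete: it wraps a framework tool (a name plus a callable) so a kernel can invoke it through its own function interface. The sketch below uses minimal stand-in classes, not the real `BaseTool`/`KernelFunction` types, which require the installed libraries.

```python
# Illustrative sketch of what a KernelFunctionFromTool adapter does:
# forward a kernel-style invoke() call to a wrapped framework tool.
# These classes are stand-ins, not the real AutoGen/SK types.
from typing import Any, Callable, Dict


class FakeTool:
    """Stand-in for an AutoGen-style tool: a name plus a callable."""

    def __init__(self, name: str, func: Callable[..., Any]) -> None:
        self.name = name
        self._func = func

    def run(self, **kwargs: Any) -> Any:
        return self._func(**kwargs)


class FakeKernelFunctionFromTool:
    """Stand-in adapter: exposes the tool through an invoke() signature
    a kernel would call, unpacking the argument dict for the tool."""

    def __init__(self, tool: FakeTool) -> None:
        self.name = tool.name
        self._tool = tool

    def invoke(self, arguments: Dict[str, Any]) -> Any:
        return self._tool.run(**arguments)


add_tool = FakeTool("add", lambda a, b: a + b)
fn = FakeKernelFunctionFromTool(add_tool)
print(fn.invoke({"a": 2, "b": 3}))  # 5
```

Dedicated tests for this adapter would mostly exercise argument marshalling like the `invoke` call above, which is why plugin-specific tests make sense once the agent lands.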
    return max_tokens - used_tokens


@property
def model_info(self) -> ModelInfo:
One detail here that I almost forgot: model info is generally not exposed directly by the Semantic Kernel ChatCompletionClientBase. I think this is something that is not exposed consistently across providers, and the only reliable way to provide accurate information would be to keep a hardcoded map from model to capabilities. Have we thought about how to maintain model info for different providers?
Looks good! If you have a chance, could you add the relevant tool and model pages here so that the API reference docs show up: https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/docs/src/reference/python
We can always do it post-merge if you'd rather get this in now to avoid conflicts.
Why are these changes needed?
Related issue number
Checks