Class AzureContentSafetyService
- Namespace: FoundationaLLM.Gatekeeper.Core.Services
- Assembly: FoundationaLLM.Gatekeeper.Core.dll
Implements the IContentSafetyService interface.
public class AzureContentSafetyService : IContentSafetyService
Constructors
AzureContentSafetyService(IOrchestrationContext, IHttpClientFactoryService, IOptions<AzureContentSafetySettings>, ILogger<AzureContentSafetyService>)
Constructor for the Azure Content Safety service.
public AzureContentSafetyService(IOrchestrationContext callContext, IHttpClientFactoryService httpClientFactoryService, IOptions<AzureContentSafetySettings> options, ILogger<AzureContentSafetyService> logger)
Parameters
- callContext (IOrchestrationContext): Stores context information extracted from the current HTTP request. This information is primarily used to inject HTTP headers into downstream HTTP calls.
- httpClientFactoryService (IHttpClientFactoryService): The HTTP client factory service.
- options (IOptions&lt;AzureContentSafetySettings&gt;): The configuration options for the Azure Content Safety service.
- logger (ILogger&lt;AzureContentSafetyService&gt;): The logger for the Azure Content Safety service.
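The service is typically wired up through dependency injection rather than constructed directly. The following is a minimal registration sketch; the configuration section name ("AzureContentSafety") and the use of the standard options pattern are assumptions, as the actual FoundationaLLM setup may bind settings differently (e.g. from Azure App Configuration).

```csharp
// Minimal DI registration sketch (names and section path are assumptions).
builder.Services.AddOptions<AzureContentSafetySettings>()
    .Bind(builder.Configuration.GetSection("AzureContentSafety")); // section name is hypothetical

// Register the service behind its interface so consumers depend on
// IContentSafetyService rather than the concrete implementation.
builder.Services.AddScoped<IContentSafetyService, AzureContentSafetyService>();
```

With this registration in place, any consumer can take an IContentSafetyService constructor parameter and receive a fully configured AzureContentSafetyService.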
Methods
AnalyzeText(string)
Checks if a text is safe or not based on pre-configured content filters.
public Task<AnalyzeTextFilterResult> AnalyzeText(string content)
Parameters
- content (string): The text content that needs to be analyzed.
Returns
- Task<AnalyzeTextFilterResult>
The text analysis result, which includes a boolean flag indicating whether the content is considered safe. If the content is unsafe, the reason is also returned.
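A typical call site checks the returned flag before forwarding the prompt downstream. This sketch assumes the result exposes properties named Safe and Reason, based on the description above; the actual member names on AnalyzeTextFilterResult may differ.

```csharp
// Hypothetical usage; property names (Safe, Reason) are assumptions
// inferred from the documented behavior, not confirmed member names.
AnalyzeTextFilterResult result = await contentSafetyService.AnalyzeText(userPrompt);
if (!result.Safe)
{
    // Block the request and surface the filter's reason to the caller.
    logger.LogWarning("Prompt rejected by content filters: {Reason}", result.Reason);
    return;
}
```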
DetectPromptInjection(string)
Detects attempted prompt injections and jailbreaks in user prompts.
public Task<string?> DetectPromptInjection(string content)
Parameters
- content (string): The text content that needs to be analyzed.
Returns
- Task&lt;string?&gt;
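Given the Task&lt;string?&gt; return type, a reasonable calling pattern treats a null result as "no injection detected". This interpretation is an assumption based on the nullable signature, not confirmed by the documentation above.

```csharp
// Hypothetical usage; the null-means-safe convention is an assumption
// derived from the Task<string?> signature.
string? detectionResult = await contentSafetyService.DetectPromptInjection(userPrompt);
if (detectionResult != null)
{
    // A non-null value is assumed to describe the detected attack;
    // reject the prompt before it reaches the orchestration layer.
    logger.LogWarning("Prompt injection detected: {Detail}", detectionResult);
    return;
}
```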