Class AzureContentSafetyService
- Namespace
- FoundationaLLM.Gatekeeper.Core.Services
- Assembly
- FoundationaLLM.Gatekeeper.Core.dll
Implements the IContentSafetyService interface.
public class AzureContentSafetyService : IContentSafetyService
- Inheritance
- object → AzureContentSafetyService
- Implements
- IContentSafetyService
Constructors
AzureContentSafetyService(IOrchestrationContext, IHttpClientFactoryService, IOptions<AzureContentSafetySettings>, ILogger<AzureContentSafetyService>)
Constructor for the Azure Content Safety service.
public AzureContentSafetyService(IOrchestrationContext callContext, IHttpClientFactoryService httpClientFactoryService, IOptions<AzureContentSafetySettings> options, ILogger<AzureContentSafetyService> logger)
Parameters
callContext
IOrchestrationContext
Stores context information extracted from the current HTTP request. This information is primarily used to inject HTTP headers into downstream HTTP calls.
httpClientFactoryService
IHttpClientFactoryService
The HTTP client factory service.
options
IOptions<AzureContentSafetySettings>
The configuration options for the Azure Content Safety service.
logger
ILogger<AzureContentSafetyService>
The logger for the Azure Content Safety service.
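Because the constructor takes `IOptions<AzureContentSafetySettings>` and DI-resolved services, the class is intended to be registered in a dependency injection container. A minimal registration sketch follows; the configuration section name `"AzureContentSafety"` is an assumption for illustration, not a key confirmed by the FoundationaLLM documentation.

```csharp
// Sketch of service registration in an ASP.NET Core Program.cs.
// The "AzureContentSafety" section name is a hypothetical placeholder.
builder.Services
    .AddOptions<AzureContentSafetySettings>()
    .Bind(builder.Configuration.GetSection("AzureContentSafety"));

// Register the implementation behind its interface so consumers depend
// only on IContentSafetyService.
builder.Services.AddScoped<IContentSafetyService, AzureContentSafetyService>();
```

Consumers then take `IContentSafetyService` as a constructor parameter rather than instantiating `AzureContentSafetyService` directly.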
Methods
AnalyzeText(string)
Checks if a text is safe or not based on pre-configured content filters.
public Task<AnalyzeTextFilterResult> AnalyzeText(string content)
Parameters
content
string
The text content that needs to be analyzed.
Returns
- Task<AnalyzeTextFilterResult>
The text analysis result, which includes a Boolean flag indicating whether the content is considered safe. If the content is unsafe, the result also includes the reason.
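A typical call site awaits the analysis and rejects unsafe content. This is an illustrative sketch only: the `Safe` and `Reason` member names on `AnalyzeTextFilterResult` are assumptions based on the description above, not a confirmed API surface.

```csharp
// contentSafetyService is an injected IContentSafetyService.
// Property names on the result are hypothetical, inferred from the
// documented behavior (a safety flag plus a reason when unsafe).
AnalyzeTextFilterResult result = await contentSafetyService.AnalyzeText(userPrompt);

if (!result.Safe)
{
    // Stop processing and surface the reason reported by the content filters.
    throw new InvalidOperationException($"Unsafe content: {result.Reason}");
}
```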
DetectPromptInjection(string)
Detects attempted prompt injections and jailbreaks in user prompts.
public Task<string?> DetectPromptInjection(string content)
Parameters
content
string
The text content that needs to be analyzed.
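Given the `Task<string?>` return type, a natural usage pattern is to treat a non-null result as a detection. This sketch assumes (it is not confirmed by this page) that `null` means no prompt injection or jailbreak was detected, and that a non-null string describes the detection.

```csharp
// Illustrative sketch; the null-means-clean convention is an assumption
// inferred from the Task<string?> return type.
string? detection = await contentSafetyService.DetectPromptInjection(userPrompt);

if (detection is not null)
{
    // Log the detection and keep the prompt away from the downstream model.
    logger.LogWarning("Prompt injection detected: {Detection}", detection);
    return;
}
```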