Interface IContentSafetyService
Namespace: FoundationaLLM.Common.Interfaces
Assembly: FoundationaLLM.Common.dll
Interface for calling a content safety service.
public interface IContentSafetyService
Methods
AnalyzeText(string)
Checks whether a text is safe based on pre-configured content filters.
Task<ContentSafetyAnalysisResult> AnalyzeText(string content)
Parameters
content (string): The text content that needs to be analyzed.
Returns
- Task<ContentSafetyAnalysisResult>
The text analysis result, which includes a boolean flag indicating whether the content is considered safe. If the content is unsafe, the result also includes the reason.
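As a usage sketch, a caller might gate user input on this check before passing it to a model. The member names `Safe` and `Reason` on ContentSafetyAnalysisResult are assumptions inferred from the description above, not confirmed by this reference:

```csharp
// Hypothetical usage sketch. Assumes an IContentSafetyService instance
// (e.g. resolved from dependency injection) and that
// ContentSafetyAnalysisResult exposes Safe and Reason members
// (names inferred from the description, not confirmed).
public async Task<string?> GetSafePromptAsync(
    IContentSafetyService contentSafety, string userPrompt)
{
    ContentSafetyAnalysisResult result = await contentSafety.AnalyzeText(userPrompt);
    if (!result.Safe)
    {
        // When the content is unsafe, the result also carries the reason.
        Console.WriteLine($"Content rejected: {result.Reason}");
        return null;
    }
    return userPrompt;
}
```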
DetectPromptInjection(string)
Detects attempted prompt injections and jailbreaks in user prompts.
Task<ContentSafetyAnalysisResult> DetectPromptInjection(string content)
Parameters
content (string): The text content that needs to be analyzed.
Returns
- Task<ContentSafetyAnalysisResult>
The text analysis result, which includes a flag indicating whether the content contains prompt injections or jailbreaks.
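A typical use is screening a user prompt for injection or jailbreak attempts before forwarding it to a model. The `Safe` member on ContentSafetyAnalysisResult is an assumption based on the description above:

```csharp
// Hypothetical sketch: reject a prompt if an injection or jailbreak
// attempt is detected. The Safe member name is an assumption.
public async Task<bool> IsPromptTrustworthyAsync(
    IContentSafetyService contentSafety, string userPrompt)
{
    ContentSafetyAnalysisResult result =
        await contentSafety.DetectPromptInjection(userPrompt);
    // True when no prompt injection or jailbreak was detected.
    return result.Safe;
}
```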
DetectPromptInjection(string, IEnumerable<ContentSafetyDocument>, CancellationToken)
Analyzes a collection of documents to detect potential prompt injection attacks.
Task<ContentSafetyDocumentAnalysisResult> DetectPromptInjection(string context, IEnumerable<ContentSafetyDocument> documents, CancellationToken cancellationToken)
Parameters
context (string): The context in which the documents are being analyzed.
documents (IEnumerable<ContentSafetyDocument>): An enumerable collection of documents to analyze for prompt injection.
cancellationToken (CancellationToken): A token to monitor for cancellation requests.
Returns
- Task<ContentSafetyDocumentAnalysisResult>
A task that represents the asynchronous operation. The task result contains the analysis results for the documents.
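This overload suits retrieval-augmented scenarios, where documents fetched from external sources may carry indirect prompt injections. A minimal sketch, assuming the same service instance; the shape of ContentSafetyDocument and ContentSafetyDocumentAnalysisResult is not detailed in this reference, so the example only shows the call pattern:

```csharp
// Hypothetical sketch: analyze retrieved documents for indirect prompt
// injection before adding them to a model's context window.
public async Task<ContentSafetyDocumentAnalysisResult> ScreenDocumentsAsync(
    IContentSafetyService contentSafety,
    string context,
    IEnumerable<ContentSafetyDocument> documents,
    CancellationToken cancellationToken)
{
    // The returned task completes with the per-document analysis results.
    return await contentSafety.DetectPromptInjection(
        context, documents, cancellationToken);
}
```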