Module redlite.model
Sub-modules
- redlite.model.anthropic_model
- redlite.model.aws_bedrock_model
- redlite.model.gemini_model
- redlite.model.hf_model
- redlite.model.llamacpp_model
- redlite.model.openai_model
Classes
CannedModel
class CannedModel(
response: str
)
Returns the canned response, regardless of the input.
- response (str): string to return (same for every request).
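The behavior can be sketched with a tiny stand-in (illustrative only, not redlite's actual implementation):

```python
def canned_model(response: str):
    """Sketch of CannedModel's behavior: ignore the conversation
    and always return the same string."""
    def engine(messages: list[dict]) -> str:
        return response  # same answer for every request
    return engine

model = canned_model("Hello!")
model([{"role": "user", "content": "Anything at all"}])  # -> "Hello!"
```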
Ancestors (in MRO)
- redlite._core.NamedModel
ConvertSystemToUserModel
class ConvertSystemToUserModel(
model: redlite._core.NamedModel,
assistant_confirmation: str = 'OK'
)
Wraps a model and replaces the system message with a user one.
Useful if the underlying model was not trained with a system message.
- model (NamedModel): the model to wrap.
- assistant_confirmation (str): assistant message to use as a response to the generated user one. Optional, default is "OK".
As an example, the following code:
engine_model = ... # a model that does not accept "system" message
model = ConvertSystemToUserModel(engine_model, "Aye aye, Sir!")
and the following input:
[
{ "role": "system", "content": "You are useful and safe model" },
{ "role": "user", "content": "How to kill a process?" },
]
will make engine_model receive the following converted prompt:
[
{ "role": "user", "content": "You are useful and safe model" },
{ "role": "assistant", "content": "Aye aye, Sir!" },
{ "role": "user", "content": "How to kill a process?" },
]
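The transformation above can be sketched as a plain function over the message list (an illustrative sketch, not redlite's actual code):

```python
def convert_system_to_user(messages: list[dict], assistant_confirmation: str = "OK") -> list[dict]:
    """Sketch: a leading system message becomes a user message,
    followed by a confirmation turn from the assistant."""
    if messages and messages[0]["role"] == "system":
        return [
            {"role": "user", "content": messages[0]["content"]},
            {"role": "assistant", "content": assistant_confirmation},
            *messages[1:],
        ]
    return list(messages)  # no system message: pass through unchanged
```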
Ancestors (in MRO)
- redlite._core.NamedModel
IgnoreSystemModel
class IgnoreSystemModel(
model: redlite._core.NamedModel
)
Wraps a model and removes the system message from the model input (if any).
Useful if the underlying model was not trained with a system message.
- model (NamedModel): the model to wrap.
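The filtering step can be sketched as (illustrative, not the actual implementation):

```python
def drop_system_messages(messages: list[dict]) -> list[dict]:
    # Sketch: remove any system messages before calling the wrapped model.
    return [m for m in messages if m["role"] != "system"]
```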
Ancestors (in MRO)
- redlite._core.NamedModel
MakeSystemModel
class MakeSystemModel(
model: redlite._core.NamedModel,
system_prompt: str
)
Wraps a model and inserts (or replaces existing) system message.
Useful to set the system message when the underlying dataset has none.
- model (NamedModel): the model to wrap.
- system_prompt (str): the system message to insert.
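The insert-or-replace logic can be sketched as (illustrative, not the actual implementation):

```python
def make_system(messages: list[dict], system_prompt: str) -> list[dict]:
    # Sketch: replace an existing system message, or insert one at the front.
    if messages and messages[0]["role"] == "system":
        return [{"role": "system", "content": system_prompt}, *messages[1:]]
    return [{"role": "system", "content": system_prompt}, *messages]
```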
Ancestors (in MRO)
- redlite._core.NamedModel
ModerationModel
class ModerationModel(
model: redlite._core.NamedModel,
*,
api_key: str | None = None,
moderation_model: str = 'omni-moderation-latest',
refusal_message: str = 'I refuse to answer this question.',
threshold: float | dict[str, float] = 0.8
)
Wraps a model and filters conversation content using OpenAI Moderation API.
https://platform.openai.com/docs/guides/safety-best-practices#use-our-free-moderation-api
Before delegating to the inner model, all message contents are checked against OpenAI's moderation API. If any content is flagged as potentially harmful, the model returns a refusal message instead of processing the request.
This avoids having your OpenAI account flagged for Usage Policy violations.
Requires an OpenAI API key (via the api_key parameter or the OPENAI_API_KEY environment variable).
- model (NamedModel): the model to wrap.
- api_key (str | None): OpenAI API key. Optional, defaults to None (will use the OPENAI_API_KEY environment variable).
- moderation_model (str): which OpenAI moderation model to use. Default is "omni-moderation-latest".
- refusal_message (str): message to return when content is flagged. Default is "I refuse to answer this question.".
- threshold (float | dict[str, float]): score threshold(s) for flagging content. If a float is provided, it is used as the threshold for all categories. If a dict is provided, it should map category names to threshold floats. Missing categories default to a threshold of 1.0 (never flag). Default is 0.8. For the list of valid category names, see the OpenAI Moderation API docs.
Note: If the moderation API fails (network error, rate limit, etc.), the wrapper will fail closed (return refusal) to maintain safety guarantees.
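The threshold handling described above can be sketched as follows. `is_flagged` is a hypothetical helper (not redlite's actual code), and the strict greater-than comparison is an assumption, chosen so that a threshold of 1.0 never flags scores in [0, 1]:

```python
def is_flagged(category_scores: dict[str, float], threshold) -> bool:
    """Sketch of the threshold logic: `category_scores` maps moderation
    category names to scores returned by the moderation API."""
    if isinstance(threshold, dict):
        # Categories missing from the dict default to 1.0, i.e. never flag.
        return any(
            score > threshold.get(category, 1.0)
            for category, score in category_scores.items()
        )
    # A single float applies to all categories.
    return any(score > threshold for score in category_scores.values())
```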
Ancestors (in MRO)
- redlite._core.NamedModel
ParrotModel
class ParrotModel(
)
Returns the last user message.
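The behavior can be sketched as (illustrative, not the actual implementation):

```python
def parrot(messages: list[dict]) -> str:
    # Sketch: echo the content of the most recent user message.
    return next(m["content"] for m in reversed(messages) if m["role"] == "user")
```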
Ancestors (in MRO)
- redlite._core.NamedModel
RemoveThinking
class RemoveThinking(
model: redlite._core.NamedModel
)
Wraps a model and removes the thinking text block from the model output (if any).
Useful if the underlying model uses "reasoning" and includes a "thinking trace" in its answer.
- model (NamedModel): the model to wrap.
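Assuming the thinking trace is delimited by `<think>...</think>` tags (a common convention, but an assumption here, not something the docs specify), the removal can be sketched as:

```python
import re

def remove_thinking(answer: str) -> str:
    # Sketch: strip a leading <think>...</think> block, plus surrounding
    # whitespace, leaving only the final answer text.
    return re.sub(r"^\s*<think>.*?</think>\s*", "", answer, flags=re.DOTALL)
```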
Ancestors (in MRO)
- redlite._core.NamedModel
ThrottleModel
class ThrottleModel(
model: redlite._core.NamedModel,
*,
calls_per_minute=60
)
Wraps a model and throttles model calls to the specified interval.
- model (NamedModel): the model to wrap.
- calls_per_minute (float): how many calls per minute are allowed. Fractional values are allowed (0.5 calls per minute means 1 call every 2 minutes). Default is 60.
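The throttling can be sketched as a wrapper that enforces a minimum interval of 60 / calls_per_minute seconds between successive calls (illustrative, not redlite's actual code):

```python
import time

def throttled(fn, calls_per_minute: float = 60):
    """Sketch of a throttle wrapper: sleeps so that successive calls are
    at least 60 / calls_per_minute seconds apart."""
    interval = 60.0 / calls_per_minute  # e.g. 0.5 cpm -> 120 s between calls
    last = [0.0]  # timestamp of the previous call

    def wrapper(*args, **kwargs):
        wait = last[0] + interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last[0] = time.monotonic()
        return fn(*args, **kwargs)

    return wrapper
```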
Ancestors (in MRO)
- redlite._core.NamedModel