now.finetuning.settings module#

This module contains pre-configurations for finetuning on the demo datasets.

class now.finetuning.settings.FinetuneSettings(perform_finetuning, model_name, add_embeddings, bi_modal, loss, pre_trained_embedding_size=None, finetuned_model_artifact=None, token=None, hidden_sizes=(128,), batch_size=128, epochs=50, finetune_layer_size=128, train_val_split_ration=0.9, num_val_queries=50, eval_match_limit=20, num_items_per_class=4, learning_rate=0.0005, pos_mining_strat='hard', neg_mining_strat='hard', early_stopping_patience=5)[source]#

Bases: object

perform_finetuning: bool#
model_name: str#
add_embeddings: bool#
bi_modal: bool#
loss: str#
pre_trained_embedding_size: Optional[int] = None#
finetuned_model_artifact: Optional[str] = None#
token: Optional[str] = None#
hidden_sizes: Tuple[int, ...] = (128,)#
batch_size: int = 128#
epochs: int = 50#
finetune_layer_size: int = 128#
train_val_split_ration: float = 0.9#
num_val_queries: int = 50#
eval_match_limit: int = 20#
num_items_per_class: int = 4#
learning_rate: float = 0.0005#
pos_mining_strat: str = 'hard'#
neg_mining_strat: str = 'hard'#
early_stopping_patience: int = 5#
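The class behaves like a standard dataclass: the first five fields are required, the rest fall back to the defaults listed above. A minimal, self-contained sketch (mirroring a subset of the fields rather than importing the real module) illustrates construction and default handling:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class FinetuneSettingsSketch:
    # Required fields (no defaults), as in FinetuneSettings.
    perform_finetuning: bool
    model_name: str
    add_embeddings: bool
    bi_modal: bool
    loss: str
    # Defaulted fields (subset of the real class).
    pre_trained_embedding_size: Optional[int] = None
    hidden_sizes: Tuple[int, ...] = (128,)
    batch_size: int = 128
    epochs: int = 50
    learning_rate: float = 0.0005


settings = FinetuneSettingsSketch(
    perform_finetuning=True,
    model_name='mlp',
    add_embeddings=True,
    bi_modal=True,
    loss='TripletMarginLoss',
    epochs=10,  # override one default; the rest keep their defaults
)
```

Fields not passed at construction time (here `batch_size`, `learning_rate`, etc.) keep the class-level defaults.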
now.finetuning.settings.parse_finetune_settings(user_input, dataset, model_name, loss, finetune_datasets=(), add_embeddings=True, pre_trained_embedding_size=None)[source]#

This function parses the user-input configuration into the finetune settings.

Return type

FinetuneSettings
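How such a parser might combine its arguments is sketched below. This is an illustrative assumption, not the real implementation: the membership test against `finetune_datasets` and the returned dict (standing in for a `FinetuneSettings` instance) are hypothetical.

```python
def parse_finetune_settings_sketch(user_input, dataset, model_name, loss,
                                   finetune_datasets=(), add_embeddings=True,
                                   pre_trained_embedding_size=None):
    # Hypothetical mapping: finetuning is enabled only when the chosen
    # dataset is one of the pre-configured finetune datasets.
    return {
        'perform_finetuning': dataset in finetune_datasets,
        'model_name': model_name,
        'loss': loss,
        'add_embeddings': add_embeddings,
        'pre_trained_embedding_size': pre_trained_embedding_size,
    }


settings = parse_finetune_settings_sketch(
    user_input=None,  # the real function inspects the user-input object
    dataset='deepfashion',
    model_name='mlp',
    loss='TripletMarginLoss',
    finetune_datasets=('deepfashion',),
)
```

With `dataset` present in `finetune_datasets`, the sketch enables finetuning; for any other dataset it stays disabled.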