SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
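Because pooling uses the [CLS] token and the final Normalize() module scales every embedding to unit length, cosine similarity between two embeddings reduces to a plain dot product. A minimal sketch illustrating this property (installation is covered in the Usage section below; the example sentences are made up):
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v4python")

# The Normalize() module means every embedding has L2 norm ~1.0
emb = model.encode(["Intro to Python", "Foundations of Python"])
print(np.linalg.norm(emb, axis=1))  # ~[1. 1.]

# ...so cosine similarity is simply the dot product
print(emb[0] @ emb[1])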
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v4python")
# Run inference
sentences = [
"Foundations of Probability Theory in Python. This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.. tags: conditional probability, bayes' theorem. Languages: Course language: Python. Prerequisites: Prerequisite course required: Hypothesis Testing in Python. Target audience: Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools..",
"Course Name:Foundations of Probability Theory in Python|Course Description:This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.|Tags:conditional probability, bayes' theorem|Course language: Python|Target Audience:Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.|Prerequisite course required: Hypothesis Testing in Python",
'Course Name:Foundations of Generative AI|Course Description:Foundations of Generative AI|Tags:Foundations, Generative, AI|Course language: None|Target Audience:No target audience|No prerequisite course required',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
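Since the model was tuned on (course description, course name) pairs, a natural application is retrieving the best-matching course for a free-text query. A minimal sketch, reusing the model loaded above; the corpus and query strings here are illustrative, not part of the training data:
# Hypothetical corpus of courses, formatted like the training samples
corpus = [
    "Course Name:Foundations of Probability Theory in Python|...",
    "Course Name:Foundations of Generative AI|...",
]
query = "I want to learn Bayes' theorem and conditional probability in Python"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity applies the model's similarity function (cosine)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())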
Training Details
Training Dataset
Unnamed Dataset
- Size: 48 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 48 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 49 tokens, mean: 188.12 tokens, max: 322 tokens | min: 47 tokens, mean: 186.12 tokens, max: 320 tokens |
- Samples:
anchor | positive |
---|---|
Outlier Detection with DBSCAN in Python. Density-Based Spatial Clustering of Applications with Noise, or DBSCAN, contrasts groups of densely-packed data with points isolated in low-density regions. In this course, learners will discuss the optimal data conditions suited to this method of outlier detection. After discussing different basic varieties of anomaly detection, learners will implement DBSCAN to identify likely outliers. They will also use a balancing method called Synthetic Minority Oversampling Technique, or SMOTE, to generate additional examples of outliers and improve the anomaly detection model.. tags: outlier, SMOTE, anomaly, DBSCAN. Languages: Course language: Python. Prerequisites: Prerequisite course required: Intro to Clustering. Target audience: Professionals with some Python experience who would like to expand their skills to learn about various outlier detection techniques. | Course Name:Outlier Detection with DBSCAN in Python |
Foundations of Python. This course introduces learners to the fundamentals of the Python programming language. Python is one of the most widely used computer languages in the world, helpful for building web-based applications, performing data analysis, and automating tasks. By the end of this course, learners will identify how data scientists use Python, distinguish among basic data types and data structures, and perform simple arithmetic and variable-related tasks.. tags: functions, basics, data-structures, control-flow. Languages: Course language: Python. Prerequisites: Prerequisite course required: Version Control with Git. Target audience: This is an introductory level course for data scientists who want to learn basics of Python and implement different data manipulation techniques using popular data wrangling Python libraries.. | Course Name:Foundations of Python |
Text Generation with LLMs in Python. This course provides a practical introduction to the latest advancements in generative AI with a focus on text. To start, the course explores the use of reinforcement learning in natural language processing (NLP). Learners will delve into approaches for conversational and question-answering (QA) tasks, highlighting the capabilities, limitations, and use cases of models available in the Hugging Face library, such as Dolly v2. Finally, learners will gain hands-on experience in creating their own chatbot by using the concepts of Retrieval Augmented Generation (RAG) in LlamaIndex.. tags: course, provides, practical, introduction, latest, advancements, generative, AI, focus, text., start,, course, explores, use, reinforcement, learning, natural, language, processing, (NLP)., Learners, will, delve, into, approaches, conversational, question-answering, (QA), tasks,, highlighting, capabilities,, limitations,, use, cases, models, available, Hugging, Face, library,, such, as, Dolly, v2., Finally,, learners, will, gain, hands-on, experience, creating, their, own, chatbot, using, concepts, Retrieval, Augmented, Generation, (RAG), LlamaIndex.. Languages: Course language: None. Prerequisites: No prerequisite course required. Target audience: No target audience. | Course Name:Text Generation with LLMs in Python |
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
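These are the default parameters of MultipleNegativesRankingLoss in Sentence Transformers, which treats every other in-batch positive as a negative for a given anchor. A minimal sketch of how this loss would be constructed (model is the loaded SentenceTransformer):
from sentence_transformers import losses, util

# scale=20.0 and cos_sim match the parameters listed above
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)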
Evaluation Dataset
Unnamed Dataset
- Size: 12 evaluation samples
- Columns: anchor and positive
- Approximate statistics based on the first 12 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 46 tokens, mean: 162.92 tokens, max: 363 tokens | min: 44 tokens, mean: 160.92 tokens, max: 361 tokens |
- Samples:
anchor | positive |
---|---|
Fundamentals of Deep Learning for Multi GPUs. Find out how to use multiple GPUs to train neural networks and effectively parallelize training of deep neural networks using TensorFlow.. tags: multiple GPUs, neural networks, TensorFlow, parallelize. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals want to train deep neural networks on multi-GPU technology to shorten the training time required for data-intensive applications. | Course Name:Fundamentals of Deep Learning for Multi GPUs |
Building Transformer-Based NLP Applications (NVIDIA). Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks. In this course, you'll construct a Transformer neural network in PyTorch, Build a named-entity recognition (NER) application with BERT, Deploy the NER application with ONNX and TensorRT to a Triton inference server. Upon completion, you’ll be proficient in task-agnostic applications of Transformer-based models. Data Society's instructors are certified by NVIDIA’s Deep Learning Institute to teach this course.. tags: named-entity recognition, text, Natural language processing, classification, NLP, NER. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals with basic knowledge of neural networks and want to expand their knowledge in the world of Natural langauge processing. | Course Name:Building Transformer-Based NLP Applications (NVIDIA) |
Nonlinear Regression in Python. In this course, learners will practice implementing a variety of nonlinear regression techniques in Python to model complex relationships beyond simple linear patterns. They will learn to interpret key transformations, including logarithmic (log-log, log-linear) and polynomial models, and identify interaction effects between predictor variables. Through hands-on exercises, they will also develop practical skills in selecting, fitting, and validating the most appropriate nonlinear model for their data.. tags: nonlinear, regression. Languages: Course language: Python. Prerequisites: Prerequisite course required: Multiple Linear Regression. Target audience: This is an intermediate level course for data scientists who want to learn to understand and estimate relationships between a set of independent variables and a continuous dependent variable.. | Course Name:Nonlinear Regression in Python |
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 3e-06
- max_steps: 24
- warmup_ratio: 0.1
- batch_sampler: no_duplicates
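In Sentence Transformers 3.x, these values map directly onto SentenceTransformerTrainingArguments. A minimal sketch (output_dir is a placeholder):
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=3e-06,
    max_steps=24,
    warmup_ratio=0.1,
    # no_duplicates avoids identical in-batch texts, which would otherwise
    # act as false negatives for MultipleNegativesRankingLoss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)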
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-06
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: 24
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
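Putting the pieces together, the standard Sentence Transformers 3.x training loop would combine the arguments and loss sketched above. A minimal sketch, assuming train_dataset and eval_dataset are datasets.Dataset objects with the anchor and positive columns listed earlier (the variable names are hypothetical):
from sentence_transformers import SentenceTransformerTrainer

trainer = SentenceTransformerTrainer(
    model=model,                  # the loaded SentenceTransformer
    args=args,                    # SentenceTransformerTrainingArguments from above
    train_dataset=train_dataset,  # hypothetical: the 48 (anchor, positive) pairs
    eval_dataset=eval_dataset,    # hypothetical: the 12 evaluation pairs
    loss=loss,                    # MultipleNegativesRankingLoss from above
)
trainer.train()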
Training Logs
Epoch | Step | Training Loss | loss |
---|---|---|---|
6.6667 | 20 | 0.046 | 0.0188 |
Framework Versions
- Python: 3.9.13
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.2.2
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.20.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}