---
license: apache-2.0
tags:
- legal
- chemistry
- medical
- text-generation-inference
- art
pipeline_tag: text-generation
model-index:
- name: Nidum-Limitless-Gemma-2B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 24.24
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 3.45
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.0
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.9
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.12
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nidum/Nidum-Limitless-Gemma-2B
      name: Open LLM Leaderboard
---

# Nidum-Limitless-Gemma-2B LLM

Welcome to the repository for Nidum-Limitless-Gemma-2B, an advanced language model that provides unrestricted and versatile responses across a wide range of topics. Unlike conventional models, Nidum-Limitless-Gemma-2B is designed to handle any type of question and deliver comprehensive answers without content restrictions.

## Key Features:

- **Unrestricted Responses:** Addresses any query with detailed, unrestricted responses, providing a broad spectrum of information and insights.
- **Versatility:** Capable of engaging with a diverse range of topics, from complex scientific questions to casual conversation.
- **Advanced Understanding:** Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
- **Customizability:** Adaptable to specific user needs and preferences for different types of interactions.
## Use Cases:

- Open-Ended Q&A
- Creative Writing and Ideation
- Research Assistance
- Educational and Informational Queries
- Casual Conversations and Entertainment

## How to Use:

To get started with Nidum-Limitless-Gemma-2B, you can use the following sample code for testing:

```python
import torch
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline in bfloat16
pipe = pipeline(
    "text-generation",
    model="nidum/Nidum-Limitless-Gemma-2B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "who are you"},
]

# Generate a reply and extract the assistant's message from the chat output
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```

## Release Date:

Nidum-Limitless-Gemma-2B is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.

## Contributing:

We welcome contributions that enhance the model or expand its functionality. Details on how to contribute will be available in upcoming updates.

## Quantized Model Versions

To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:

| Model Version                             | Description                                                                                              |
|-------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Nidum-Limitless-Gemma-2B-Q2_K.gguf**    | Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments.  |
| **Nidum-Limitless-Gemma-2B-Q4_K_M.gguf**  | Balances performance and precision, offering faster inference with moderate memory usage.                 |
| **Nidum-Limitless-Gemma-2B-Q8_0.gguf**    | Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy.        |
| **Nidum-Limitless-Gemma-2B-F16.gguf**     | Full 16-bit floating-point precision for maximum accuracy, ideal for high-end GPUs.                       |

The quantized models are available here: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B-GGUF (a minimal loading sketch is included at the end of this card).

## Contact:

For any inquiries or further information, please contact us at **info@nidum.ai**.

---

Dive into limitless possibilities with Nidum-Limitless-Gemma-2B!

Special thanks to @cognitivecomputations for inspiring us and for scouting the best datasets we could round up to make a rockstar model for you.

---

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nidum__Nidum-Limitless-Gemma-2B).

| Metric              | Value |
|---------------------|------:|
| Avg.                |  5.94 |
| IFEval (0-Shot)     | 24.24 |
| BBH (3-Shot)        |  3.45 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  1.90 |
| MuSR (0-shot)       |  4.12 |
| MMLU-PRO (5-shot)   |  1.93 |
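## Appendix: Loading a GGUF Quant (sketch)

The quantized GGUF files listed above are meant for llama.cpp-compatible runtimes rather than `transformers`. Below is a minimal sketch using the `llama-cpp-python` bindings; the repository ID comes from the link above, while the file name, context size, and generation settings are assumptions you should check against the actual contents of the GGUF repository.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized variant from the GGUF repository.
# The filename below matches the table in this card; adjust it if the repo uses different names.
model_path = hf_hub_download(
    repo_id="nidum/Nidum-Limitless-Gemma-2B-GGUF",
    filename="Nidum-Limitless-Gemma-2B-Q4_K_M.gguf",
)

# Load the model on CPU; pass n_gpu_layers=-1 to offload all layers when built with GPU support.
llm = Llama(model_path=model_path, n_ctx=4096)

# Run a chat-style completion, mirroring the transformers example above.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "who are you"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"].strip())
```

Smaller quants (Q2_K, Q4_K_M) trade some accuracy for lower memory use, while Q8_0 and F16 favor fidelity at the cost of RAM/VRAM, as described in the table above.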