Open WebUI Function: https://openwebui.com/f/quaz93/imagine_phi

This Phi-4 model is part of a debugging project called Micro-Dose. My goal was to build a system that could trace and debug the model's reasoning and thought processes without relying on a large dataset. I found this was possible with a tiny dataset of just 90 rows, written specifically as math word problems. In early iterations the system only exposed its reasoning patterns when asked a math-related question, so I made a few changes to the debugging structure, including the order of information and the naming conventions. These adjustments introduced several specialized components (a parsing sketch follows the list):

- Imagine: Reveals the model's initial conceptualization of the question, helping debug how it interprets user intent and frames the problem space.
- Like Subjects: Shows related topics the model considers relevant, exposing how it forms conceptual connections across domains.
- Associated Thoughts: Displays the secondary connections and implications the model is considering, useful for debugging conceptual linking.
- Dialectical Analysis: Reveals how the model weighs opposing viewpoints or contradictions, helping debug the balance of its reasoning.
- Backwards Induction: Shows how the model works backward from the desired outcome to the current state, useful for debugging goal-oriented reasoning.
- Metacognition: Exposes the model's self-reflection on its own reasoning process, helping debug its awareness of limitations and confidence.
- Possible Secondary Solving Method: Reveals alternative approaches the model considered but did not use as its primary method, useful for debugging solution diversity.
- Begin of Thought / End of Thought: Brackets the model's detailed step-by-step reasoning, critical for debugging logical flow.
- Begin of Solution / End of Solution: Contains the model's final answer after all of the above, useful for comparing against the thought process.
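The exact markup the model emits isn't documented above, so the following is a minimal sketch, assuming each component appears as a plain-text header on its own line (e.g. "Imagine:" or "Begin of Thought"); adjust the pattern to whatever the model actually prints.

```python
import re

# Section names from the list above; the header syntax itself is an assumption.
SECTION_HEADERS = [
    "Imagine", "Like Subjects", "Associated Thoughts",
    "Dialectical Analysis", "Backwards Induction", "Metacognition",
    "Possible Secondary Solving Method",
    "Begin of Thought", "End of Thought",
    "Begin of Solution", "End of Solution",
]

def split_debug_sections(text: str) -> dict[str, str]:
    """Map each debug section name to its body, in order of appearance."""
    pattern = re.compile(
        r"^(" + "|".join(re.escape(h) for h in SECTION_HEADERS) + r"):?\s*$",
        re.MULTILINE,
    )
    matches = list(pattern.finditer(text))
    sections: dict[str, str] = {}
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections[m.group(1)] = text[start:end].strip()
    return sections
```

Splitting the output this way makes it easy to, for example, diff the Begin of Thought block against the Begin of Solution block when auditing a run.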

What led me to this approach was a desire to refine my model-debugging methodology. Moving forward, I aim to analyze the essential reasoning patterns first, then add a layer that exposes higher-level cognitive processes. Debug mode is usually activated with prompts like "Use complex terms to explain this", "Explain this to me in a complex way", or "Explain this with some complexity" (a sketch of sending such a prompt programmatically follows the screenshots below). The debug structure took about 15-25 minutes to train onto a fresh Phi-4 base model (5 epochs, 110 steps in total). I initially thought the system would be too specialized to handle general queries; surprisingly, it generalizes well across topics, which is unexpected given that similar debugging approaches with other datasets yielded no successful results.

[Image: Debugging Tags]

[Image: Normal Thought Process]

[Image: Model in use with a Web Search in Open WebUI]
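Below is a minimal sketch of triggering debug mode over an OpenAI-compatible API, which both Open WebUI and Ollama expose for locally served models; the base_url, api_key, and model name are placeholder assumptions, not values from this card.

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible server (URL is an assumption).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="imagine-phi",  # hypothetical local model name
    messages=[
        # One of the activation phrasings mentioned above.
        {"role": "user",
         "content": "Use complex terms to explain this: why do ships float?"},
    ],
)
print(response.choices[0].message.content)
```

Any of the three activation phrasings should work in the user message; an ordinary prompt without them yields the normal thought process shown above.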

Model details: GGUF format, 14.7B params, llama architecture, 4-bit quantization.
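Since the weights ship as a 4-bit GGUF, they can be run with llama.cpp-based tooling; here is a minimal sketch using llama-cpp-python, with a hypothetical filename (substitute the actual .gguf file from this repo).

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Imagine-Phi-v0.2-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window; raise it for long debug traces
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain this to me in a complex way: entropy."}],
)
print(out["choices"][0]["message"]["content"])
```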


Model tree: Quazim0t0/Imagine-Phi-v0.2-GGUF is quantized from the base model microsoft/phi-4.