This Jamba model has been pruned to 1B parameters and then trained on the first 50k examples of the Ultra Interact Pair dataset for instruction-based fine-tuning.
Initial tests work, but results may be inconsistent. More information and examples will be posted later.
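
In the meantime, the following is a minimal loading-and-generation sketch using the Hugging Face `transformers` library. The repository ID below is a placeholder, not the actual model location, and the dtype and generation settings are assumptions; a recent `transformers` release with Jamba support is also assumed.

```python
# Minimal usage sketch. "your-username/jamba-1b-ultrainteract" is a placeholder
# repo ID; replace it with the actual Hub repository for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/jamba-1b-ultrainteract"  # placeholder, not the real ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",
)

# Simple single-turn prompt to check that the instruction tuning responds sensibly.
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```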