
SetFit with mini1013/master_domain

This is a SetFit model that can be used for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
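
To make the two phases concrete, here is a minimal sketch of what the second phase amounts to: once the Sentence Transformer body has been fine-tuned, the classification head is simply a scikit-learn LogisticRegression fitted on the body's embeddings. The texts and labels below are illustrative placeholders, not the actual training data.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Phase 1 produces a fine-tuned embedding body; here we simply load one
body = SentenceTransformer("mini1013/master_domain")

# Placeholder few-shot data: a handful of labeled product titles
texts = ["원목 듀얼 모니터받침대 미송 B타입", "카멜마운트 GDA3 모니터 거치대"]
labels = [1, 3]

# Phase 2: fit the head on the (now fixed) sentence embeddings
embeddings = body.encode(texts)
head = LogisticRegression().fit(embeddings, labels)

print(head.predict(body.encode(["22인치 모니터 보안필름"])))
```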

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain (fine-tuned from klue/roberta-base)
  • Classification head: a LogisticRegression instance
  • Number of Classes: 6
  • Model Size: 111M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
Label 5
  • '(주)근호컴 [리버네트워크]USB 2.0 리피터 전용 전원 어댑터 (NX-USBEXPW) (주)근호컴'
  • 'NEXI 넥시 정품 NX-USBEXPW아답터 (NX0284) (주)유니정보통신'
  • '국산 12V 5A 모니터 아답터 ML-125A 헤라유통'
Label 3
  • '카멜마운트 GDA3 고든 디자인 모니터 거치대 모니터암 듀얼 블랙 주식회사 카멜인터내셔널'
  • '카멜 CA2 화이트 나뭉'
  • '마루느루 마운트뷰 MV-G1A 셜크'
Label 0
  • '셋탑 박스 게임기 리모컨 수납 TV 모니터 TOP 공간 선반 공유기 거치대 아이디어윙'
  • '리모컨수납 TV 모니터 TOP 공간선반 Black 연상연하'
  • '애니포트 TV거치대 엘마운트 다용도 멀티 선반 S900 이스토어'
Label 1
  • 'ELLOVEN 엘로벤 모니터스탠드+서랍 엘로벤 스탠드 앤트러 (804.851.02) 랩앤툴스'
  • '썬엔원 유보드 모니터받침대 U-BOARD Basic [화이트] 강화유리 / 유리색상: 투명 블랙 (주)세븐앤씨'
  • '앱코 MES100 사이드 폴딩 모니터 받침대 선반 받침 서랍 데스크 정리 블랙 앱코 MES100 블랙 (주)드림팩토리샵'
Label 2
  • '아이존아이앤디 EZ MSM-10 아이러브드라이브(I Love Drive)'
  • '아이존아이앤디 EZ MSM-10/EZ MSM-10/조절브라켓/모니터스탠드/높낮이조절/조절스탠드/모니터홀타입/홀타입스탠드 EZ MSM-10 기쁘다희샵'
  • '루나랩 베사확장브라켓 200x100 200x200 주식회사 루나'
Label 4
  • '지클릭커 휴 쉴드 PET 부착식 정보보호 모니터 보안필름 22인치 가이드컴퓨터'
  • '힐링쉴드 11890340 22인치 모니터 블루라이트차단 보호필름 거치식 조립형 양면필터 온라인정품인증점'
  • '지클릭커 휴 쉴드 PET 부착식 정보보호 모니터 보안필름 22인치 주식회사 리더샵'

Evaluation

Metrics

| Label | Metric |
|:------|:-------|
| all   | 0.8586 |
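
The card does not state which metric this is; for SetFit text-classification cards it is typically accuracy. A hedged sketch of how such a score can be recomputed on a held-out split (test_texts and test_labels are placeholders for the actual evaluation data):

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("mini1013/master_cate_el10")

# Placeholder held-out data: product titles with integer labels 0-5
test_texts = ["원목 듀얼 모니터받침대 미송 B타입", "22인치 모니터 보안필름"]
test_labels = [1, 4]

preds = model.predict(test_texts)
print(accuracy_score(test_labels, list(preds)))  # the card reports 0.8586 across all labels
```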

Uses

Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_el10")
# Run inference
preds = model("원목 듀얼 모니터받침대 미송 B타입 M  주식회사 제이테크(J-TECH)")
```
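
Beyond single-string calls, the loaded model also supports batch prediction and, because the head is a LogisticRegression, class probabilities via predict_proba. A small sketch (the input strings are illustrative):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_el10")

texts = [
    "원목 듀얼 모니터받침대 미송 B타입 M  주식회사 제이테크(J-TECH)",
    "지클릭커 휴 쉴드 PET 부착식 정보보호 모니터 보안필름 22인치",
]
preds = model.predict(texts)        # one label per input text
probs = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head
print(preds, probs)
```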

Training Details

Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 4   | 9.9725 | 24  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 50                    |
| 1     | 50                    |
| 2     | 13                    |
| 3     | 50                    |
| 4     | 5                     |
| 5     | 50                    |

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
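
These values correspond one-to-one to SetFit's TrainingArguments. As a hedged reconstruction of the run (the actual training split is not included in this card, so a toy dataset stands in for it):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in for the real training split: "text" and "label" columns
train_dataset = Dataset.from_dict({
    "text": ["원목 듀얼 모니터받침대 미송 B타입", "22인치 모니터 보안필름"],
    "label": [1, 4],
})

# Tuples are (embedding fine-tuning phase, classifier training phase);
# distance_metric=cosine_distance and margin=0.25 match the listed defaults
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)

model = SetFitModel.from_pretrained("mini1013/master_domain")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```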

Training Results

| Epoch   | Step | Training Loss | Validation Loss |
|:--------|:-----|:--------------|:----------------|
| 0.0286  | 1    | 0.4958        | -               |
| 1.4286  | 50   | 0.0386        | -               |
| 2.8571  | 100  | 0.0016        | -               |
| 4.2857  | 150  | 0.0001        | -               |
| 5.7143  | 200  | 0.0           | -               |
| 7.1429  | 250  | 0.0           | -               |
| 8.5714  | 300  | 0.0           | -               |
| 10.0    | 350  | 0.0           | -               |
| 11.4286 | 400  | 0.0001        | -               |
| 12.8571 | 450  | 0.0           | -               |
| 14.2857 | 500  | 0.0001        | -               |
| 15.7143 | 550  | 0.0           | -               |
| 17.1429 | 600  | 0.0001        | -               |
| 18.5714 | 650  | 0.0           | -               |
| 20.0    | 700  | 0.0           | -               |

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0

Citation

BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
