무회 Blog
testBot001.py

```python
# import modules
from telegram.ext import (Updater, CommandHandler, MessageHandler, Filters,)
import logging
from melon_rank import show_music_rank

# print(show_music_rank())
print('kase', show_music_rank)

# Enable logging
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.INFO)
logger = logging.getLogger(__name__)

# My bot token from Bot..
```
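The routing that `Updater` and `CommandHandler` perform for the bot above can be mimicked with a plain dict-based dispatcher. This is an illustrative sketch only, not the python-telegram-bot API; the names `command`, `dispatch`, and `handlers` are made up for the example:

```python
# Minimal sketch of command dispatch, loosely mimicking what CommandHandler
# registration does in python-telegram-bot. All names here are illustrative.
handlers = {}

def command(name):
    """Register the decorated function as the handler for /name."""
    def register(func):
        handlers[name] = func
        return func
    return register

@command("rank")
def rank(args):
    # The real bot would call show_music_rank() from melon_rank here.
    return "melon rank placeholder"

def dispatch(message):
    """Route a '/cmd arg1 arg2' message to its registered handler."""
    if not message.startswith("/"):
        return None
    name, *args = message[1:].split()
    handler = handlers.get(name)
    return handler(args) if handler else None

print(dispatch("/rank"))
```

A real bot attaches handlers to a dispatcher and polls Telegram for updates; the dict lookup above is only the routing step in isolation.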
In [1]:
```python
# conda install -c conda-forge ipywidgets
```
In [2]:
```python
from fast_bert.data_cls import BertDataBunch

DATA_PATH = './../Downloads/fastai/fast-bert-1.8.0/sample_data/imdb_movie_reviews/data/'
LABEL_PATH = './../Downloads/fastai/fast-bert-1.8.0/sample_data/imdb_movie_reviews/label/'
OUTPUT_DIR = './../Downloads/fastai/fast-bert-1.8.0/sample_data/imdb_movie_reviews/output/'..
```
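The three path constants above share one base directory, so they are easier to keep consistent with `pathlib`. A small sketch assuming the same layout as the snippet (the base path is copied from it):

```python
from pathlib import Path

# Base of the sample dataset, as used in the snippet above.
BASE = Path('./../Downloads/fastai/fast-bert-1.8.0/sample_data/imdb_movie_reviews')

DATA_PATH = BASE / 'data'     # training/validation CSVs
LABEL_PATH = BASE / 'label'   # labels file
OUTPUT_DIR = BASE / 'output'  # checkpoints and logs are written here

print(DATA_PATH)
```

`BertDataBunch` accepts plain strings, so `str(DATA_PATH)` can be passed where the original code used a string literal.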
In [1]:
```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.train()
```
Output (truncated):
```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls...
```
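`model.train()` above does not train anything; it recursively flips a `training` flag so that layers like dropout behave in training mode. A pure-Python sketch of that mechanism, mimicking `torch.nn.Module` semantics (illustrative, not the torch implementation):

```python
# Toy stand-in for torch.nn.Module's train()/eval() mode switch.
class Module:
    def __init__(self):
        self.training = True   # torch modules also start in training mode
        self.children = []

    def train(self, mode=True):
        # The flag propagates to every child module, which is why one call
        # on the top-level model changes dropout behaviour everywhere.
        self.training = mode
        for child in self.children:
            child.train(mode)
        return self

    def eval(self):
        return self.train(False)

head = Module()
model = Module()
model.children.append(head)
model.eval()
print(model.training, head.training)
```

Actual optimization still requires a loss, `loss.backward()`, and an optimizer step; `train()` only sets the mode.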
In [4]:
```python
## BertForMaskedLM
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]
# print(input_ids)
outputs = model(input_ids, labels=input_ids)
loss, prediction_scores = outputs..
```
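Passing `labels=input_ids` makes the model return a token-level cross-entropy loss over its prediction scores. The arithmetic behind that loss can be shown in pure Python with a toy vocabulary (this is the generic cross-entropy formula, not BERT itself):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy of one token: -log softmax(logits)[target]."""
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    log_prob = (logits[target] - m) - math.log(sum(exps))
    return -log_prob

# Toy 4-word vocabulary: the correct token (index 2) has the largest logit,
# so the loss is small.
loss = cross_entropy([0.1, 0.2, 3.0, -1.0], target=2)
print(loss)
```

BERT computes this per masked position and averages; a low loss means the model assigns high probability to the true token.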
In [6]:
```python
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
print('success')
```
success

In [7]:
```python
nlp_sentence_classif = pipeline('sentiment-analysis')
nlp_sentence_classif('Such a nice weather outside !')
```
Out[7]: [{'label': 'POSITIVE', 'score': 0.9997655749320984}]

In [12]:
```python
nlp_token_class = pipeline('ner')
nlp_token_class('Hugging Face is a French co..
```
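The `[{'label': ..., 'score': ...}]` shape the sentiment pipeline returns is an argmax over softmaxed class logits. A sketch with made-up logits (the `classify` helper and its logit values are invented for illustration; the real pipeline runs a model to produce them):

```python
import math

def classify(logits, labels=('NEGATIVE', 'POSITIVE')):
    """Return a pipeline-style [{'label': ..., 'score': ...}] for one input."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]     # softmax over the class logits
    best = probs.index(max(probs))
    return [{'label': labels[best], 'score': probs[best]}]

# Made-up logits strongly favouring the positive class.
result = classify([-1.2, 3.4])
print(result)
```

The `score` is therefore a probability in (0, 1) that sums to 1 across classes, which is why values like 0.9997 indicate a confident prediction.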