👋 about

Junbum Lee / 이준범

AI/NLP Researcher
💌 MailTo: jun@beomi.net (or beomi@snu.ac.kr)
🖥 Github: https://github.com/beomi
Last update: Mar 2023

Publications

[Journalism] News comment sections and online echo chambers: The ideological alignment between partisan news stories and their user comments

Abstract
This study explored the presence of digital echo chambers in the realm of partisan media's news comment sections in South Korea. We analyzed the political slant of 152K user comments written by 76K unique contributors on NAVER, the country's most popular news aggregator. We found the political slant of the average user comments to be in alignment with the political leaning of the conservative news outlets; however, this was not true of the progressive media. A considerable number of comment contributors made a crossover from like-minded to cross-cutting partisan media and argued with their political opponents. The majority of these crossover commenters were "headstrong ideologues," followed by "flip-floppers" and "opponents." The implications of the present study are discussed in light of the potential for the news comment sections to be the digital cafés of Public Sphere 2.0 rather than echo chambers.

[HCLT 2020] KcBERT: Korean BERT Pretrained on Korean Comments

Abstract
Recently, natural language processing has achieved substantial performance gains on a variety of tasks through pretraining and transfer learning. Google's BERT is a representative pretrained model; alongside Google's multilingual model, several Korean research institutes and companies have released BERT models trained on Korean datasets. However, these BERT models show different downstream (transfer-learning) performance depending on the characteristics of the corpus used for pretraining. In this work, we introduce KcBERT, trained on Korean news comment data so that it can handle ordinary users' sentences more flexibly, including colloquial language, neologisms, special characters, and emojis found on social media. After minimal data cleaning, we trained a BERT WordPiece tokenizer and pretrained both BERT Base and BERT Large models. The trained models are released on the HuggingFace Model Hub. Comparing transfer-learning performance on Korean datasets, KcBERT achieved the best score on the Korean movie review corpus (NSMC) and performed on par with existing Korean BERT models on the remaining datasets.
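The "minimal data cleaning" step mentioned above can be illustrated with a small regex pass. This is a hedged sketch under assumptions — keep Hangul, Latin letters, digits, basic punctuation, and emoji, and collapse repeated whitespace — not the exact preprocessing used for KcBERT, whose rules may differ in detail.

```python
import re
import unicodedata

# Illustrative "minimal cleaning" for comment text (assumed rules, not
# the exact KcBERT pipeline): keep Hangul syllables and jamo, Latin
# letters, digits, basic punctuation, and common emoji ranges; replace
# everything else with a space, then collapse whitespace.
EMOJI = r"\U0001F300-\U0001FAFF\u2600-\u27BF"
ALLOWED = re.compile(rf"[^ .,?!0-9a-zA-Z\uAC00-\uD7A3\u3131-\u318E{EMOJI}]")
MULTISPACE = re.compile(r"\s+")

def clean_comment(text: str) -> str:
    text = unicodedata.normalize("NFC", text)  # unify composed Hangul forms
    text = ALLOWED.sub(" ", text)              # drop disallowed characters
    return MULTISPACE.sub(" ", text).strip()   # collapse runs of whitespace
```

A pass like this keeps the informal signal (emoji, jamo such as ㅋㅋ) that comment-trained models rely on, while stripping markup and control characters.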

[IC2S2 2020] Anxiety vs. Anger inducing Social Messages: A Case Study of the Fukushima Nuclear Disaster

[ACL 2020 SocialNLP] BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection

Abstract
Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, for which manually labeled corpora have been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech, since both aspects are correlated. The inter-annotator agreement, measured by Krippendorff's alpha, is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since hate speech detection is a more subjective issue. Additionally, when BERT is trained with the bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and host open competitions with the corpus and benchmarks.

[EMNLP 2019 W-NUT] The Fallacy of Echo Chambers: Analyzing the Political Slants of User-Generated News Comments in Korean Media

Abstract
This study analyzes the political slants of user comments on Korean partisan media. We built a BERT-based classifier to detect political leaning of short comments via the use of semi-unsupervised deep learning methods that produced an F1 score of 0.83. As a result of classifying 27.1K comments, we found the high presence of conservative bias on both conservative and liberal news outlets. Moreover, this study discloses a considerable overlap of commenters across the partisan spectrum such that the majority of liberals (88.8%) and conservatives (63.7%) comment not only on news stories resonating with their political perspectives but also on those challenging their viewpoints. These findings advance the current understanding of online echo chambers.

Career

DataDriven (2022.01. ~)

AI/NLP Researcher
  • Developed generation models based on student competency data
  • 진로톡톡: developed models for an AI career-counseling service for teenagers

NAVER (2020.07. ~ 2020.12.)

CLOVA Research Intern
  • ๋„ค์ด๋ฒ„ ํด๋ฆฐ๋ด‡ Transformers ๊ณ„์—ด ๋ชจ๋ธ๋ง
    • KcBERT ๊ธฐ๋ฐ˜ Classifier
  • ํ•œ๊ตญ์–ด Large Language Model (GPT-3, HyperClova)

KAIST DSLAB (2019.07. ~ 2019.08.)

Summer Internship
  • The Fallacy of Echo Chambers
    • A research project on the distribution of the political slant of news outlets and users in NAVER news and comment data
      • Analyzed news outlets' political slant based on article titles and bodies
      • Analyzed users' political slant after augmenting the data with comment text analysis and user information
  • Twitter Fukushima Rumor/FakeNews Diffusion Pattern Analysis
    • A project analyzing retweet patterns of normal vs. rumor tweets about the Fukushima nuclear disaster and building a classifier
      • Analyzed RT diffusion network patterns via inbound/outbound connections

NEXON Korea (2017.10. ~ 2019.02.)

IntelligenceLabs Abuse Detection Team, SW Engineer
  • Live (Game) Bot Detection
    • A service that detects accounts running illegal programs such as in-game gold-farming bots or hacks
      • Developed data analysis models (with PySpark)
      • Developed a dashboard for analysis results (with Django/Vue)
      • Docker-based development and deployment (with AWS ECR)
  • Deep-learning-based serverless wallhack detection service for Sudden Attack (game)
    • An image-based cheat-program detection service for an FPS game
      • Built the serverless inference data flow for the deep learning model
      • Developed a real-time inference results dashboard
  • Deep-learning-based serverless profanity detection service
    • Built the profanity detector as a serverless API
      • Built the serverless inference data flow for the deep learning model
      • Developed a batch-inference service page

우아한형제들 (Woowa Brothers) (2017.07. ~ 2017.08.)

Woowahan Tech Camp 1st-cohort intern, Web Frontend track

Academic

Seoul National University (2020.03. ~ 2022.02.)

M.S. in Data Science

Seoul National University of Education (2015.03. ~ 2020.02.)

Major in Elementary Education, intensive major in Computer Education

Opensource Projects

๐Ÿ‘ KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)

A Korean Alpaca model trained with the same recipe as the Stanford Alpaca model.

๐Ÿ” ์šฐ๋ฆฌ๊ฐ€ ์ฝ์„ ๋…ผ๋ฌธ์„ ์ฐพ์•„์„œ, Cite.GG

READ ME!

Recommending similar papers?

Google Scholar, Semantic Scholar, and various other paper-search services recommend papers we are likely to be interested in, based on the papers we have searched for or saved.
Countless algorithms, and more recently deep-learning-based systems, have been built to power these recommendations.
Yet there seemed to be no service answering the most basic but intuitive question: "So, which is the paper that everyone cites, that everyone but me has read, and that I really must read?" (One may exist and I simply don't know about it 😅)
So I tried to answer that question in a simple way.

Which papers are commonly cited by papers similar to the one I am reading right now?

Suppose you are reading a paper (found, one way or another, via a keyword search on Google Scholar)...
  • there will be papers that cite this paper;
  • and there will be papers that those citing papers commonly cite!
  • so let's sort those commonly cited papers by citation count.
Cite.GG is a service that implements this idea.
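The idea above boils down to counting co-citations: collect the papers that cite the paper in hand, tally every reference across their bibliographies, and sort by count. A minimal sketch, where the citation graph is a hypothetical stand-in for whatever citation data source Cite.GG actually queries:

```python
from collections import Counter

def commonly_cited(citing_papers: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Given {citing paper: [papers it cites]}, return the papers most
    commonly cited across all citing papers, sorted by citation count."""
    counts = Counter(ref for refs in citing_papers.values() for ref in refs)
    return counts.most_common()

# Hypothetical citation graph: three papers that all cite the paper in hand.
citing = {
    "A": ["BERT", "ELMo", "Transformer"],
    "B": ["BERT", "Transformer"],
    "C": ["BERT", "GloVe"],
}
print(commonly_cited(citing))
# → [('BERT', 3), ('Transformer', 2), ('ELMo', 1), ('GloVe', 1)]
```

The top of this list is exactly the "paper everyone cites" the question asks about; in a real service the citation lists would come from a bibliographic API rather than a hard-coded dict.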

KcBERT: Korean comments BERT

🤗 Pretrained BERT model & WordPiece tokenizer trained on Korean comments

KcELECTRA: Korean comments ELECTRA

🤗 Korean Comments ELECTRA: an ELECTRA model trained on Korean comments

Personal Interest

NLP / Social Data Analysis / Data Mining

Conference presentation

  • ์“ธ๋ฐ๋งŽ์€ ์›น ํฌ๋กค๋Ÿฌ ๋งŒ๋“ค๊ธฐ with Python @ GDG Campus Summer Party 2017

Data Engineering

Dev Conference presentation

Cloud, Automation, Scaling, Serverless

OpenSource Projects

Etc.

[๊ตญ๋ฏผ๋Œ€ํ•™๊ต] ๋น„์ „๊ณต์ž๋ฅผ ์œ„ํ•œ ํŒŒ์ด์ฌ ๊ฐ•์˜ (2018.12)
๊ตญ๋ฏผ๋Œ€ํ•™๊ต ๋น„์ „๊ณต์ž ํ•™์ƒ๋“ค์„ ์œ„ํ•œ ํŒŒ์ด์ฌ ์ž…๋ฌธ ๊ฐ•์˜๋กœ, ํŒŒ์ด์ฌ ๊ธฐ์ดˆ๋ถ€ํ„ฐ Pandas๋ฅผ ์ด์šฉํ•œ ๊ธฐ์ดˆ์ ์ธ ๋ถ„์„ ๋ฐฉ๋ฒ•์„ ์ตํžŒ ํ›„ Kaggle Tutorial์„ ์ง„ํ–‰ํ•ด๋ณธ ๊ฐ•์˜.
[ํŒจ์ŠคํŠธ์บ ํผ์Šค] ํŒŒ์ด์ฌ์„ ํ™œ์šฉํ•œ ์‹ค์ „ ์›นํฌ๋กค๋ง CAMP ๊ฐ•์˜(1๊ธฐ, 2๊ธฐ, 3๊ธฐ) (2017.9 - 2018.3)
์›น์ด ๊ตฌ์„ฑ๋˜๋Š” ๋ฐฉ์‹๋ถ€ํ„ฐ python์˜ ์—ฌ๋Ÿฌ ํฌ๋กค๋ง ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™œ์šฉํ•ด ์‹ค์ œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ˆ˜์ค€์˜ ํฌ๋กค๋Ÿฌ๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋„๋ก ์ง„ํ–‰ํ•˜๋Š” ์‹ค์Šตํ˜• ํฌ๋กค๋ง ๊ฐ•์˜
[์ˆ˜์›๋Œ€ํ•™๊ต] ํŒŒ์ด์ฌ์„ ์ด์šฉํ•œ ์›น ํฌ๋กค๋Ÿฌ ๋งŒ๋“ค๊ธฐ ํŠน๊ฐ• (2017.11)
ํŒจ์ŠคํŠธ์บ ํผ์Šค์—์„œ ์ง„ํ–‰ํ•œ ๊ฐ•์˜ ๋‚ด์šฉ์„ ๊ธฐ๋ฐ˜์œผ๋กœ 1์ผ ํŠน๊ฐ• ์ง„ํ–‰
[ํ‚ค์›€์ฆ๊ถŒ] ํŒŒ์ด์ฌ ๋ฐ์ดํ„ฐ๋ถ„์„ ์ž…๋ฌธ ๊ฐ•์˜ (2017.5 - 2017.6)
Pandas ํŒจํ‚ค์ง€๋ฅผ ์ด์šฉํ•œ ๋ฐ์ดํ„ฐ ๋ถ„์„ ์ž…๋ฌธ ๊ฐ•์˜.
์ฆ๊ถŒ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•ด ๊ฐ„๋‹จํ•œ ๋ถ„์„์„ ํ•˜๋Š” ์‚ฌ๋ก€์™€ ํ•จ๊ป˜ ์‹ค์Šต์„ ์ง„ํ–‰ํ•จ
[NEXON] ์‚ฌ๋‚ด ํฌ๋กค๋ง ๊ฐ•์˜ & ์„œ๋ฒ„๋ฆฌ์Šค ๋”ฅ๋Ÿฌ๋‹ ๊ฐ•์˜
  • ํŒŒ์ด์ฝ˜ ๋ฐœํ‘œ ์ž๋ฃŒ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ง„ํ–‰ํ•œ ํฌ๋กค๋ง ๊ฐ•์˜
  • MNIST๋ฅผ PyTorch์™€ CNN์„ ์ด์šฉํ•ด ๋งŒ๋“  Classification ๋ชจ๋ธ์„ ์ œ์ž‘ํ•˜๊ณ , ํ•ด๋‹น ๋ชจ๋ธ์„ AWS Lambda๋ฅผ ์ด์šฉํ•ด ์„œ๋ฒ„๋ฆฌ์Šค API๋กœ ๋งŒ๋“  ๋’ค Web Front ํŽ˜์ด์ง€๋ฅผ ์ œ์ž‘ํ•จ