

  1. Weka-based Data Mining (Artificial Intelligence) • Affiliation: School of Computer Engineering • Members: 김우빈 (201011321), 육근웅 (200611230), 임국현 (201011328), 주영진 (200913988)

  2. Contents • Introduction • Data Architecture • Demo • QnA

  3. Contents Introduction 1. Data Mining Tool Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License.

  4. Contents Introduction 2. Why use WEKA in the Project http://www.cs.waikato.ac.nz/ml/weka/

  5. Contents Introduction 2. Why use WEKA in the Project - Free availability under the GNU General Public License - Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform - A comprehensive collection of data preprocessing and modeling techniques - Ease of use due to its graphical user interfaces

  6. Contents Introduction 2. Why use WEKA in the Project - The Preprocess panel has facilities for importing data from a database, a CSV file, etc., and for preprocessing this data using a so-called filtering algorithm. These filters can be used to transform the data (e.g., turning numeric attributes into discrete ones) and make it possible to delete instances and attributes according to specific criteria. - The Classify panel enables the user to apply classification and regression algorithms (indiscriminately called classifiers in Weka) to the resulting dataset, to estimate the accuracy of the resulting predictive model, and to visualize erroneous predictions, ROC curves, etc., or the model itself (if the model is amenable to visualization like, e.g., a decision tree).
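
Everything the Preprocess and Classify panels do can also be driven from Weka's Java API. Below is a minimal sketch of that workflow: load a dataset, discretize its numeric attributes with a filter, then estimate a J48 decision tree's accuracy with 10-fold cross-validation. The file name lol_games.arff and the class name PreprocessAndClassify are illustrative assumptions, not part of the original project.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Discretize;

    public class PreprocessAndClassify {
        public static void main(String[] args) throws Exception {
            // Load the dataset (file name is hypothetical).
            Instances data = DataSource.read("lol_games.arff");

            // Preprocess: turn numeric attributes into discrete ones with a filter.
            Discretize discretize = new Discretize();
            discretize.setInputFormat(data);
            Instances filtered = Filter.useFilter(data, discretize);
            filtered.setClassIndex(filtered.numAttributes() - 1); // last attribute = class

            // Classify: estimate a J48 decision tree's accuracy with 10-fold cross-validation.
            J48 tree = new J48();
            Evaluation eval = new Evaluation(filtered);
            eval.crossValidateModel(tree, filtered, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }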

  7. Contents Introduction 2. Why use WEKA in the Project - The Associate panel provides access to association rule learners that attempt to identify all important interrelationships between attributes in the data. - The Cluster panel gives access to the clustering techniques in Weka, e.g., the simple k-means algorithm. There is also an implementation of the expectation maximization algorithm for learning a mixture of normal distributions.
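
As a rough illustration of the Cluster panel outside the GUI, the sketch below runs the simple k-means algorithm mentioned above through the Java API. The label attribute is removed first because clustering is unsupervised; the file name and the choice of three clusters are assumptions.

    import weka.clusterers.SimpleKMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Remove;

    public class KMeansSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("lol_games.arff"); // hypothetical file

            // Drop the last attribute, assumed here to be the class label.
            Remove remove = new Remove();
            remove.setAttributeIndices("last");
            remove.setInputFormat(data);
            Instances unlabeled = Filter.useFilter(data, remove);

            SimpleKMeans kmeans = new SimpleKMeans();
            kmeans.setNumClusters(3);   // assumed number of clusters
            kmeans.buildClusterer(unlabeled);
            System.out.println(kmeans); // prints centroids and cluster sizes
        }
    }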

  8. Contents Introduction 2. Why use WEKA in the Project - The Select attributes panel provides algorithms for identifying the most predictive attributes in a dataset. - The Visualize panel shows a scatter plot matrix, where individual scatter plots can be selected and enlarged, and analyzed further using various selection operators.
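
The Select attributes panel corresponds to the weka.attributeSelection package. A minimal sketch, assuming a CFS subset evaluator with best-first search and a hypothetical input file:

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.BestFirst;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SelectAttributesSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("lol_games.arff"); // hypothetical file
            data.setClassIndex(data.numAttributes() - 1);

            AttributeSelection selector = new AttributeSelection();
            selector.setEvaluator(new CfsSubsetEval());
            selector.setSearch(new BestFirst());
            selector.SelectAttributes(data);

            // Prints the attributes judged most predictive of the class.
            System.out.println(selector.toResultsString());
        }
    }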

  9. Contents Introduction 3. Introduction Project 3-1 Project Theme - We use Data of LOL(League Of Legend) Played Game 3-2 Why Choose LOL Project - Recently almost gamers are playing this game - We think mind map “how to get win LOL?!” - Because we wannamake the game a little more interesting (new item combination. new strategy in picking Champion)

  10. Contents Introduction 4. Preprocessing 4-1 Clustering 4-1-1 Selecting a Clusterer - EM 4-1-2 Learning Clusters - The Cluster section, like the Classify section, has Start/Stop buttons, a result text area and a result list. These all behave just like their classification counterparts. Right-clicking an entry in the result list brings up a similar menu, except that it shows only two visualization options: Visualize cluster assignments and Visualize tree. The latter is grayed out when it is not applicable
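
The same EM run can also be reproduced programmatically. The sketch below is a minimal version, under the assumption that the champion-pick data sits in a file named TestAssociate.arff; by default EM selects the number of clusters itself via cross-validation, and ClusterEvaluation prints roughly what the Visualize cluster assignments option summarizes in the GUI.

    import weka.clusterers.ClusterEvaluation;
    import weka.clusterers.EM;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class EmClusteringSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("TestAssociate.arff"); // assumed file name

            EM em = new EM();        // numClusters left at -1: chosen by cross-validation
            em.buildClusterer(data);

            ClusterEvaluation eval = new ClusterEvaluation();
            eval.setClusterer(em);
            eval.evaluateClusterer(data);
            System.out.println(eval.clusterResultsToString());
        }
    }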

  11. Contents Introduction 4. Preprocessing 4-2 Associating 4-2-1 Setting Up - FP-Growth 4-2-2 Learning Associations - Once appropriate parameters for the association rule learner have been set, click the Start button. When complete, right-clicking on an entry in the result list allows the results to be viewed or saved.
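
A sketch of the same FP-Growth step through the Java API, assuming the ARFF layout shown on the next slide and a couple of illustrative parameter values (20 rules, minimum confidence 0.8):

    import weka.associations.FPGrowth;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class FpGrowthSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("TestAssociate.arff"); // assumed file name

            FPGrowth fp = new FPGrowth();
            fp.setNumRulesToFind(20); // report the 20 best rules
            fp.setMinMetric(0.8);     // minimum confidence for a rule
            fp.buildAssociations(data);

            // toString() lists the mined rules with their support and confidence.
            System.out.println(fp);
        }
    }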

  12. Data Architecture @RELATION TestAssociate @ATTRIBUTE Sona { no, yes } @ATTRIBUTE Soraka { no, yes } @ATTRIBUTE Shen { no, yes } @ATTRIBUTE Shyvana { no, yes } @ATTRIBUTE Swain { no, yes } @ATTRIBUTE Skarner { no, yes } @DATA no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,no,no,no,yes,yes no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,yes,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no,yes,no,yes,no,no,no,no,no,no,no,no,no,no,no,no,no,no,no
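
The relation above has one {no, yes} attribute per champion and one row per played game, with yes marking the champions picked in that game. A sketch of how such an ARFF file could be built with the Weka API (only the six attribute names visible on the slide are used here; the real dataset has many more, and the single example row is hypothetical):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import weka.core.Attribute;
    import weka.core.DenseInstance;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.core.converters.ArffSaver;

    public class BuildArffSketch {
        public static void main(String[] args) throws Exception {
            List<String> champions = Arrays.asList("Sona", "Soraka", "Shen", "Shyvana", "Swain", "Skarner");
            List<String> noYes = Arrays.asList("no", "yes");

            ArrayList<Attribute> attrs = new ArrayList<>();
            for (String champ : champions) {
                attrs.add(new Attribute(champ, noYes)); // nominal {no, yes} attribute
            }
            Instances data = new Instances("TestAssociate", attrs, 0);

            // One row per game: default every champion to "no", then flag the picks.
            Instance game = new DenseInstance(attrs.size());
            game.setDataset(data);
            for (String champ : champions) {
                game.setValue(data.attribute(champ), "no");
            }
            game.setValue(data.attribute("Sona"), "yes"); // example pick
            data.add(game);

            // Write out the .arff file that the Associate panel / FP-Growth reads.
            ArffSaver saver = new ArffSaver();
            saver.setInstances(data);
            saver.setFile(new File("TestAssociate.arff"));
            saver.writeBatch();
        }
    }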

  13. Data Architecture View Clustering

  14. Data Architecture View Clustering

  15. Demo View Clustering

  16. Demo View Clustering

  17. Demo View Clustering

  18. Demo View Clustering

  19. Demo View Associating

  20. Demo
  Confidence - P(B|A) = P(A,B) / P(A) - the probability that the consequent occurs given that the antecedent has occurred - the proportion of transactions containing item A that also contain item B
  Lift - P(B|A) / P(B) - how much better the rule predicts than random guessing - Confidence divided by the frequency of the consequent: Lift = Confidence / P(B) - meant to filter out meaningless rules (a rule is only useful when Lift is greater than 1) - ex: a simple numerical example: Conf(A → B) = 0.9; if P(B) = 1, then Lift(A → B) = 0.9; if P(B) = 0.1, then Lift(A → B) = 9
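
The numbers on this slide can be checked with a few lines of Java; here is a tiny sketch using hypothetical counts (100 games contain pick A, and 90 of those also contain B):

    public class RuleMetrics {
        // Confidence of A -> B: P(A,B) / P(A)
        static double confidence(int countA, int countAandB) {
            return (double) countAandB / countA;
        }

        // Lift of A -> B: Confidence / P(B)
        static double lift(double confidence, double probB) {
            return confidence / probB;
        }

        public static void main(String[] args) {
            double conf = confidence(100, 90);   // 0.9
            System.out.println(lift(conf, 1.0)); // 0.9 -> no better than guessing
            System.out.println(lift(conf, 0.1)); // 9.0 -> a useful rule
        }
    }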

  21. Demo

  22. Data Mining and AI - Data Learning - Data Inference - Data Results

  23. QnA

  24. Thank you!!
