
IJSTR, Volume 8, Issue 11, November 2019 Edition



International Journal of Scientific & Technology Research

Website: http://www.ijstr.org

ISSN 2277-8616



Mining Knowledge Of The Directed Acyclic Graph (DAG) And Dataset Using The Hill Climbing Algorithm


 

AUTHOR(S)

Munirah, Aslan Alwi

 

KEYWORDS

Hill Climbing algorithm, set of rules, mining knowledge, datasets, optimal DAG, IF-THEN rules, Bayesian network.

 

ABSTRACT

The Hill Climbing algorithm is commonly used to learn a Bayesian network in the form of a directed acyclic graph (DAG): the algorithm searches for the optimal DAG for a given dataset. Because a DAG in a Bayesian network expresses causal relationships, searching for the optimal DAG of a dataset is equivalent to searching for the most likely (optimal) causal relationships among the attributes, or variables, of the data. This amounts to discovering knowledge in the form of causal relationships. It is therefore natural to mine knowledge expressed as rules from the DAG by interpreting each directed edge between nodes as an IF-THEN relationship between variables. This study proposes a method for mining knowledge (a set of rules) from a dataset using the dataset's optimal DAG, under the assumption that the optimal DAG yields the most optimal set of rules. Rule mining in this approach uses the Hill Climbing algorithm as the tool for producing the optimal DAG. Other algorithms, such as Ant Colony Optimization (ACO) or genetic algorithms, could serve the same purpose, but Hill Climbing was chosen for this first stage of the research.
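The procedure the abstract describes — hill-climbing over candidate DAGs scored against a dataset, then reading each directed edge as an IF-THEN rule — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scoring function (per-node maximum-likelihood fit minus an edge penalty), the toy binary weather dataset, and all function names are assumptions introduced here.

```python
import math

def is_acyclic(nodes, edges):
    """Kahn's algorithm: True if the edge set forms a DAG over the nodes."""
    indeg = {n: 0 for n in nodes}
    for _, b in edges:
        indeg[b] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for a, b in edges:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def loglik(data, node, parents):
    """Maximum-likelihood log-likelihood of one variable given its parents."""
    counts = {}
    for row in data:
        key = (tuple(row[p] for p in parents), row[node])
        counts[key] = counts.get(key, 0) + 1
    ll = 0.0
    for (pvals, _), n in counts.items():
        total = sum(c for (pv, _), c in counts.items() if pv == pvals)
        ll += n * math.log(n / total)
    return ll

def score(data, nodes, edges, penalty=1.0):
    """Decomposable network score: data fit minus a complexity penalty per edge."""
    s = -penalty * len(edges)
    for node in nodes:
        parents = sorted(a for a, b in edges if b == node)
        s += loglik(data, node, parents)
    return s

def hill_climb(data, nodes):
    """Greedy search: repeatedly apply the best single-edge add/delete/reverse."""
    edges = set()
    best = score(data, nodes, edges)
    improved = True
    while improved:
        improved = False
        candidates = []
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                if (a, b) in edges:
                    candidates.append(edges - {(a, b)})             # delete edge
                    candidates.append(edges - {(a, b)} | {(b, a)})  # reverse edge
                else:
                    candidates.append(edges | {(a, b)})             # add edge
        for cand in candidates:
            if is_acyclic(nodes, cand):
                s = score(data, nodes, cand)
                if s > best:
                    best, edges, improved = s, cand, True
    return edges

def dag_to_rules(edges):
    """Read each directed edge A -> B as the rule 'IF A THEN B'."""
    return [f"IF {a} THEN {b}" for a, b in sorted(edges)]

# Hypothetical binary dataset in which Wet tends to co-occur with Rain.
data = ([{"Rain": 1, "Wet": 1}] * 8 + [{"Rain": 1, "Wet": 0}] * 2 +
        [{"Rain": 0, "Wet": 0}] * 8 + [{"Rain": 0, "Wet": 1}] * 2)
dag = hill_climb(data, ["Rain", "Wet"])
print(dag_to_rules(dag))  # → ['IF Rain THEN Wet']
```

Note that a likelihood-based score cannot distinguish the two orientations of a single edge (Rain → Wet and Wet → Rain fit equally well); the sketch simply keeps the first improving candidate, which is one reason structure search finds the most likely dependencies rather than guaranteed causal directions.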

 
