by Piyush Goel (@pigol1) on Friday, 29 June 2012
- Big Data Infrastructure & Processing
- Session type
- Technical level
Discuss how to use big-data systems to analyze log data in real time.
Any complex OLTP system comprises many distributed modules running on separate nodes. The failure of a single module, or a bug in any one of them, can cause a request to fail completely or return incorrect data. Debug- and error-level logs for each module help developers troubleshoot such issues. But in a large-scale distributed system, collecting the log files from all nodes and analyzing them is cumbersome because of the unstructured/semi-structured nature of the log entries.
To address this challenge, the platforms team at Capillary Technologies has developed a Log Processing Framework that lets us understand and analyze all our logs in a single place. The framework is built with Flume, MongoDB, Hive, Hadoop and custom components, which give us the capability to analyze large amounts of log data using MapReduce jobs. It lets us extract the exact causes of failure, along with performance metrics, from large amounts of raw text through an SQL-like interface in near real time.
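As a rough illustration only (the log format, module names and function names below are assumptions for the sketch, not the framework's actual code), the kind of MapReduce aggregation such a pipeline runs over raw log lines might look like this, counting ERROR entries per module while skipping unparseable lines:

```python
import re
from collections import defaultdict

# Assumed log line shape: "<module> <LEVEL> <message>"
LOG_PATTERN = re.compile(r"^(?P<module>\S+)\s+(?P<level>DEBUG|INFO|WARN|ERROR)\s+(?P<msg>.*)$")

def map_phase(lines):
    """Emit (module, 1) for every ERROR line; silently skip entries that don't parse."""
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            yield m.group("module"), 1

def reduce_phase(pairs):
    """Sum the emitted counts per module."""
    counts = defaultdict(int)
    for module, n in pairs:
        counts[module] += n
    return dict(counts)

logs = [
    "auth ERROR token expired",
    "billing INFO invoice created",
    "auth ERROR db timeout",
    "a garbled entry with no level field",
]
print(reduce_phase(map_phase(logs)))  # {'auth': 2}
```

In the real framework this map/reduce pair would run as a Hadoop job over collected log files, with the results queried through Hive's SQL-like interface rather than printed.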
This talk will focus on the design decisions that were made and the challenges that we encountered while building this framework.
Basic understanding of MongoDB and Hive/Hadoop.
Pravanjan Choudhury is an Architect with the Platforms & Applications group at Capillary Technologies. He has more than 9 years of experience building mission-critical, large-scale software and cloud-based products. Previously, he was an Architect at Minekey, a Silicon Valley start-up, and has consulted on several research projects for National Semiconductor.
Piyush Goel is a Team Lead with the Platforms group at Capillary Technologies. Before joining Capillary, he worked at Yahoo! Bangalore as a Senior Software Engineer on the Emerging Markets team. He holds a BTech and an MTech in Computer Science from the Indian Institute of Technology, Kharagpur.