- week9
- 4 August
- Fix threading bugs in the system.
- Implement a custom retry for Kazoo instead of using the client.retry helper, so that we can control the number of retries, the retry timing, and other retry-related behavior (see the sketch after this list).
- Implement a queue in the dispatcher to store the currently executing paths.
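A minimal sketch of the retry idea using Kazoo's KazooRetry class; the ensemble address, znode path, and retry values are placeholders, not the project's actual configuration:

```python
from kazoo.client import KazooClient
from kazoo.retry import KazooRetry

# A retry policy we control ourselves: bounded tries with exponential backoff.
retry = KazooRetry(max_tries=5, delay=0.5, backoff=2, max_delay=30)

zk = KazooClient(
    hosts="127.0.0.1:2181",   # placeholder ensemble address
    connection_retry=retry,   # used when (re)connecting to the ensemble
    command_retry=retry,      # used for individual commands
)
zk.start()

# A KazooRetry instance is also callable, so one-off operations can be
# wrapped explicitly instead of relying on the client.retry helper.
data, stat = retry(zk.get, "/some/znode")   # placeholder path
```

Passing the same policy as connection_retry and command_retry keeps reconnection and per-command retries under the same limits, instead of relying on the defaults behind client.retry.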
- 3 August
- Fix the run and run_task functions of the Watcher thread.
- Add a MongoDB client, along with saving and getting functions, to the system (a sketch follows).
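A rough sketch of what the save/get helpers could look like with pymongo; the connection string, database, collection, and field names are placeholders:

```python
from pymongo import MongoClient

# Placeholder connection string, database, and collection names.
client = MongoClient("mongodb://localhost:27017")
jobs = client["femto"]["jobs"]

def save_job(job_id, document):
    # Upsert so that saving the same job twice simply overwrites the record.
    jobs.replace_one({"_id": job_id}, dict(document, _id=job_id), upsert=True)

def get_job(job_id):
    # Returns the stored document, or None if the job has never been saved.
    return jobs.find_one({"_id": job_id})

save_job("job-42", {"state": "RUNNING", "dataset": "example"})
print(get_job("job-42"))
```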
- 2 August
- Create Presentation Slide Outline
- 1 August
- Test Dispatcher Thread
- Implement Watcher Class in dispatcher.py
- Create the Dispatcher Executor class and the necessary functions in dispatcher_exec.py
- 31 July
- Create Dispatcher Package
- week8
- 28 July
- Test and complete Compute Class
- Run and Fix bugs for Compute Node
- 27 July
- Redesign and create the final draft for the Summer Student Poster Session, featuring ZooKeeper as the main framework
- 26 July
- Complete the running process for the worker, as shown here
- Override the run function of Thread to make it compatible with the design (a sketch follows).
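A minimal illustration of the technique (the class and queue here are illustrative, not the project's actual worker): subclass threading.Thread and override run() so the thread loops over its own task source.

```python
import queue
import threading

class Worker(threading.Thread):
    """Illustrative worker thread whose run() is overridden to loop over tasks."""

    def __init__(self, tasks):
        super().__init__()
        self.tasks = tasks
        self._stop_event = threading.Event()

    def run(self):
        # Overridden run(): keep pulling callables until stop() is requested.
        while not self._stop_event.is_set():
            try:
                task = self.tasks.get(timeout=1.0)
            except queue.Empty:
                continue
            task()
            self.tasks.task_done()

    def stop(self):
        self._stop_event.set()

tasks = queue.Queue()
worker = Worker(tasks)
worker.start()
tasks.put(lambda: print("hello from the worker"))
tasks.join()
worker.stop()
worker.join()
```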
- 25 July
- Redefine the Compute class based on the Mesos slave by integrating the ZooKeeper logic
- Redefine the constructor for Slave
- Add simple logic for the ZooKeeper compute node
- 24 July
- Understand the ZooKeeper techniques used for job coordination
- Design Query Engine using ZooKeeper as a Coordinator
- Implement a queue using Kazoo (see the sketch below)
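A small sketch using Kazoo's Queue recipe; the ensemble address, znode path, and payloads are placeholders:

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # placeholder ensemble address
zk.start()

# Kazoo's Queue recipe stores each entry as a sequential znode under the path.
q = zk.Queue("/femto/queue")               # placeholder znode path

q.put(b"partition-0")                      # payloads are bytes
q.put(b"partition-1")

item = q.get()                             # removes and returns the head entry
print(item)

zk.stop()
```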
- week7
- 21 July
- Design and create a poster draft for the Summer Student Poster Session
- 20 July
- Set up and run ZooKeeper on the CERN OpenStack private cloud to test Kazoo (the Python client for ZooKeeper)
- 19 July
- Study the ZooKeeper recipes, which include barriers, queues, locks, and leader election (two of them are sketched below with Kazoo)
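For reference, two of those recipes as exposed by Kazoo; the ensemble address, paths, and identifiers below are placeholders:

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # placeholder ensemble address
zk.start()

# Lock recipe: only one client at a time holds the lock under this path.
lock = zk.Lock("/femto/locks/dataset-x", identifier="worker-1")
with lock:
    print("holding the lock, doing exclusive work")

# Leader election recipe: the callback runs only on the client that wins.
def lead():
    print("this process was elected leader")

election = zk.Election("/femto/election", identifier="worker-1")
election.run(lead)   # blocks until elected, then calls lead()

zk.stop()
```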
- 18 July
- Understand the ZooKeeper data model, architecture, and fundamental workflow.
- Fundamental topics include sessions, watches, the namespace, znode types, ensembles, ACLs (access control lists), consistency guarantees, and general pitfalls.
- 17 July
- Study an overview of ZooKeeper
- week6
- 14 July
- 13 July
- Study “Advanced Scheduler Techniques” to better understand how to write a good scheduler efficiently. The first topic is “Distributed Communication”, which improves the ability to receive work from clients over an HTTP API by letting both the leader and the hot-spare schedulers process HTTP requests, using techniques such as “Shared Database”, “Redirect modification requests”, and “The data is the truth”. Forced failover and resource consolidation were also studied.
- Plan to use Framework UI
- Implement a simple executor to better understand Mesos executor functionality (a sketch follows).
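A condensed sketch of an executor in the style of the old mesos.interface Python bindings (the same bindings used by the Mesos Python examples); the task handling here is illustrative only:

```python
import sys
import threading

import mesos.interface
from mesos.interface import mesos_pb2
import mesos.native


class SimpleExecutor(mesos.interface.Executor):
    def launchTask(self, driver, task):
        # Run each task in its own thread so the driver thread is not blocked.
        def run_task():
            update = mesos_pb2.TaskStatus()
            update.task_id.value = task.task_id.value
            update.state = mesos_pb2.TASK_RUNNING
            driver.sendStatusUpdate(update)

            # ... do the actual work for this task here ...

            update = mesos_pb2.TaskStatus()
            update.task_id.value = task.task_id.value
            update.state = mesos_pb2.TASK_FINISHED
            driver.sendStatusUpdate(update)

        threading.Thread(target=run_task).start()


if __name__ == "__main__":
    driver = mesos.native.MesosExecutorDriver(SimpleExecutor())
    sys.exit(0 if driver.run() == mesos_pb2.DRIVER_STOPPED else 1)
```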
- 12 July
- Implement the “do fit first” algorithm to utilize the offered resources fully (see the sketch after this list)
- Add job states to keep track of jobs (what the user wants to do) and to separate the notion of a job from a task (the code that Mesos runs on the cluster)
- Add a job-saving function for each state in the job state machine
- Research more on Reconciliation
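A toy sketch of the packing idea behind “do fit first”, independent of the Mesos API; the offer and task fields here are assumptions made for illustration:

```python
def pack_offer(offer_cpus, offer_mem, pending_tasks):
    """Greedy "fit first" packing: keep assigning pending tasks into a single
    offer until nothing else fits, so the offer is used as fully as possible.
    Each task is assumed to be a dict with "cpus" and "mem" requirements."""
    assigned, leftover = [], []
    cpus, mem = offer_cpus, offer_mem
    for task in pending_tasks:
        if task["cpus"] <= cpus and task["mem"] <= mem:
            assigned.append(task)
            cpus -= task["cpus"]
            mem -= task["mem"]
        else:
            leftover.append(task)
    return assigned, leftover

# Example: a 4-CPU / 8 GB offer and three pending tasks.
tasks = [{"cpus": 2, "mem": 4096}, {"cpus": 2, "mem": 2048}, {"cpus": 1, "mem": 1024}]
to_launch, remaining = pack_offer(4, 8192, tasks)
print(to_launch, remaining)
```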
- 11 July
- Create a Mesos application that can read commands from a JSON file, as shown in app_hello.py and HelloWorldScheduler.
- Implement a Job class for managing scheduler jobs on Mesos (a sketch follows).
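An illustrative sketch of such a Job class; the JSON layout, file name, and default resource values are hypothetical, not the project's actual format:

```python
import json

class Job(object):
    """Illustrative Job: one command the scheduler should eventually run on Mesos."""

    def __init__(self, command, cpus=0.5, mem=128):
        self.command = command
        self.cpus = cpus
        self.mem = mem
        self.state = "PENDING"   # e.g. PENDING -> STAGING -> RUNNING -> FINISHED

    @classmethod
    def from_json(cls, path):
        # Hypothetical JSON layout: a list of {"cmd": ..., "cpus": ..., "mem": ...}.
        with open(path) as f:
            specs = json.load(f)
        return [cls(s["cmd"], s.get("cpus", 0.5), s.get("mem", 128)) for s in specs]

jobs = Job.from_json("jobs.json")   # placeholder file name
```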
- 10 July
- Read more of the “Building Applications on Mesos” book
- Implement the “hello world” version of a Mesos framework
- week5
- 7 July
- Understand and follow the instructions in “Building Applications on Mesos” for creating a framework to run on Marathon, by trying to implement one.
- Test one slave failure on Mesos
- 6 July
- Research more on running instances from images and how to access the instances
- Research more on Docker in order to create an image for Marathon
- 5 July
- [Solved] Access the Mesos dashboard on the remote server with SSH tunneling
- Figure out how to run a job on the Mesos cluster with Marathon (see the sketch after this list).
- Start researching whether Marathon can satisfy the femtocode design or not
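A minimal sketch of submitting an app definition to Marathon's REST API with requests; the endpoint, app id, command, and resource values below are placeholders:

```python
import requests

MARATHON = "http://localhost:8080"   # placeholder; the real host sits behind the SSH tunnel

app = {
    "id": "/hello-femto",            # hypothetical application id
    "cmd": "python -c \"print('hello from marathon')\"",
    "cpus": 0.1,
    "mem": 32,
    "instances": 1,
}

# POST the app definition to Marathon; Marathon then keeps the app running.
resp = requests.post(MARATHON + "/v2/apps", json=app)
resp.raise_for_status()
print(resp.json())
```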
- 4 July
- Understand the Marathon basics, as shown here
- [Solved] Access the Marathon dashboard on the remote server with SSH tunneling
- [Not Solved] Access the Mesos dashboard on the remote server with SSH tunneling
- 3 July
- Figure out how to run a Mesos application on the Mesos cluster
- Understand SSH tunneling (a typical command is shown after this list)
- [Not Solved] Access the Marathon dashboard on the remote server with SSH tunneling
- [Not Solved] Access the Mesos dashboard on the remote server with SSH tunneling
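For reference, forwarding the default Mesos (5050) and Marathon (8080) web UI ports over SSH typically looks like the line below; user and remote-host are placeholders.
$ ssh -L 5050:localhost:5050 -L 8080:localhost:8080 user@remote-host
With the tunnel open, the dashboards should be reachable locally at http://localhost:5050 and http://localhost:8080.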
- week4
- 30 June
- Understand the Mesos proto interface, MesosSchedulerDriver, and the Mesos Scheduler interface
- Implement a framework with a simple data-processing task. References are the Python Mesos example, the Hello World example, and Building Applications on Mesos by David Greenberg (a condensed scheduler sketch follows).
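For reference, a condensed “hello world” scheduler in the style of those examples, using the old mesos.interface bindings; the framework name, command, resource amounts, and master address are placeholders:

```python
import uuid

import mesos.interface
from mesos.interface import mesos_pb2
import mesos.native


class HelloWorldScheduler(mesos.interface.Scheduler):
    def resourceOffers(self, driver, offers):
        # Launch one trivial shell task per offer received.
        for offer in offers:
            task = mesos_pb2.TaskInfo()
            task.task_id.value = str(uuid.uuid4())
            task.slave_id.value = offer.slave_id.value
            task.name = "hello"
            task.command.value = "echo hello world"

            cpus = task.resources.add()
            cpus.name = "cpus"
            cpus.type = mesos_pb2.Value.SCALAR
            cpus.scalar.value = 0.1

            mem = task.resources.add()
            mem.name = "mem"
            mem.type = mesos_pb2.Value.SCALAR
            mem.scalar.value = 32

            driver.launchTasks(offer.id, [task])

    def statusUpdate(self, driver, update):
        print(update.task_id.value, mesos_pb2.TaskState.Name(update.state))


if __name__ == "__main__":
    framework = mesos_pb2.FrameworkInfo()
    framework.user = ""            # Mesos fills in the current user
    framework.name = "hello-world-framework"
    driver = mesos.native.MesosSchedulerDriver(
        HelloWorldScheduler(), framework, "127.0.0.1:5050")  # placeholder master
    driver.run()
```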
- 29 June
- Re-implement the Dispatcher (the Mesos framework) to run as a Mesos application
- Modify Mesos's test_framework script to run the femto-mesos project with Mesos on the GCloud machine
- [Unresolved] Try to find a way to deploy Mesos on a cluster rather than on a standalone server
- 28 June
- Get Mesos to run on Google Cloud in order to test the API
- Implement a Mesos framework (application) by letting it run like Mesos
- Implement the Map class, a dictionary that allows dot notation for accessing keys (an object-like dictionary); a sketch follows.
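One common way to get that behavior (a sketch, not necessarily the project's exact implementation):

```python
class Map(dict):
    """Dictionary whose keys can also be read and written as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

offer = Map(cpus=4, mem=8192)
offer.gpus = 0
print(offer.cpus, offer["mem"], offer.gpus)
```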
- 27 June
- Attend the Particle Physics lecture for summer students
- Understand how to build a Mesos framework (application) and try to implement one.
- Try running Mesos on GCloud to understand more about Mesos
- 26 June
- Finish the basic simulation for the Compute node (slave), with the code for a test run in app_slave.py. To test it, run app_slave.py as shown below.
The result is shown in this Figure
$ python app_slave.py
- week3
- 23 June
- Correct my implementation design with Jim
- Start implementing a multithreaded infinite loop to simulate the Compute node (slave) behavior
- 22 June
- 21 June
- Understand Mesos design, architecture and distributed scheduling model
- Start implementing a project that demonstrates how femtocode works with Mesos and evaluates the performance of the design, as shown in the femto-mesos repo
- 20 June
- Try to solve the issue with running the Python Mesos bindings on the local machine.
- [Solved] Cannot import mesos.interface; solved by pip installing or downloading the library binary.
- [Unsolved] Cannot import mesos.native even though mesos.native is present in the build install
- Study the original paper of Mesos and its documentation
- 19 June
- Try to install Mesos on the local machine, and talk with Jim about the new design while discussing how Mesos offers resources to its frameworks.
- Research whether it is possible to implement the new design with Mesos or not. [The new design is to let the Mesos framework know about its resources immediately, via the resource's dataset and groupid, and to let the Mesos master spawn new workers if no resources are available for the framework.]
- week2
- 16 June
- Discuss the project in femtocode for which I have to do a proof of concept.
- Try to figure out whether Mesos's capabilities can be used to fulfill the femtocode design
- 15 June
- Try to solve the ldrdio dependencies issue in the standalone version
- Hands-on Python threading programming to understand more about Python threading.
- 14 June
- Discuss the femtocode design with Jim Pivarski
- Fix the path bugs on the local machine by using a relative path instead of a fixed path at line 8, as shown in this commit (see the sketch after this list)
- Watch the femtocode querying presentation from back in April to get more ideas about the project
- Try running the standalone code to compare with the distributed version (server)
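The general pattern behind that path fix, with placeholder directory and file names, is to resolve paths relative to the module rather than hard-coding them:

```python
import os

# Resolve data files relative to this module instead of hard-coding an absolute path.
# The directory and file names below are placeholders.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
input_path = os.path.join(BASE_DIR, "tests", "data", "input.root")
```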
- 13 June
- Set up this Summer Student Devlog
- Walk through the femtocode code, mostly compute.py and dispatch.py
- Make llvmnames more general by reading them from the cpython name so the code can run on several machines, as in this commit
- Find the bugs on the local machine associated with the input file path
- Study related architecture for MongoDB Sharding and Replication
- 12 June
- Discuss the femtocode basics and the inspiration behind it with Jim Pivarski
- Study the ROOT data structures
- week1
- 9 June
- Learn and understand the basics of femtocode and its design via this slide
- 8 June
- Work with ROOT to truly understand its basics
- Discuss and talk with David about DIANA-HEP
- 7 June
- Understand the fundamentals of ROOT, WLCG, and the LHC
- Figure out MongoDB Sharding and Replication
- 6 June
- Join the induction session at CERN and prepare for the upcoming work
- Meet with my supervisor, David Lange