
Milestone 1

Workshop in Information Security – Distributed Databases Project. By: Yosi Barad, Ainat Chervin and Ilia Oshmiansky. Project website: http://infosecdd.yolasite.com. Access Control: Security vs. Performance.


Presentation Transcript


  1. Milestone 1 Workshop in Information Security – Distributed Databases Project By: Yosi Barad, Ainat Chervin and Ilia Oshmiansky Project website: http://infosecdd.yolasite.com Access Control: Security vs. Performance

  2. Milestone 1: Our Plan:

  3. Plan Step: We have installed Cassandra on the lab computers

  4. Plan Step: The Cassandra database is configured to run in 2 different modes: 1) One cluster consisting of a single node, which manages all the keys and values in the database. 2) One cluster consisting of two nodes, which share the key space and each manage and store 50% of the database.
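The 50/50 two-node split can be sketched through each node's cassandra.yaml (a hedged, simplified example: hostnames are made up, and in the real file the seed list sits under `seed_provider`). With the RandomPartitioner the token range is 0..2^127, so placing the second node's token at 2^127 / 2 gives each node half the ring:

```yaml
# Node 1 (illustrative values, not our exact lab configuration)
cluster_name: 'TestCluster'
initial_token: 0
listen_address: node1.lab.local
# seeds: "node1.lab.local"   (under seed_provider in the actual file)

# Node 2 - token at 2^127 / 2 sits halfway around the ring,
# so each node owns 50% of the key space
initial_token: 85070591730234615865843651857942052864
listen_address: node2.lab.local
# seeds: "node1.lab.local"
```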

  5. Plan Step: • We have installed and built the YCSB++ source code • We used YCSB++ with the "basic" database configuration supplied, in order to test the benchmark framework

  6. Plan Step: • We used the Cassandra client shell to create keyspaces and column families, to add a column within a family, and to store and retrieve key names and values. • Cassandra supplies statistics for these manual operations, so we could get an idea of how much time each operation consumes.
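A session of this kind might look roughly like the following (an illustrative cassandra-cli transcript: keyspace, column family, and timing values are made up, and the elapsed-time line is an example of the per-operation statistics the shell prints):

```
[default@unknown] create keyspace Demo;
[default@unknown] use Demo;
[default@Demo] create column family Users with comparator = UTF8Type;
[default@Demo] set Users[utf8('alice')][utf8('email')] = utf8('alice@lab.local');
Value inserted.
Elapsed time: 12 msec(s).
[default@Demo] get Users[utf8('alice')];
```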

  7. Plan Step: • We used the cassandra-10 client binding supplied with YCSB++ in order to connect to the Cassandra database. • We ran some core benchmark tests; the results are detailed later in this document.

  8. Plan Step: • First we ran the tests from one client PC to a Cassandra server consisting of a single node. • Next we added another Cassandra node and re-ran the same tests.

  9. Plan Step: We ran the tests for these reasons: Establish a baseline by which future results (after implementing cell-level ACL) will be judged. Establish the maximal throughput of Cassandra on a single node. Compare the performance of Cassandra with one node to Cassandra with two nodes.

  10. Plan Step: • We created several scripts to automate the tests. • For example, a script that would: • 1) Run all the different workloads YCSB++ offers with different numbers of threads • 2) Create an output file with the relevant results
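A script of the kind described could be sketched as follows (a Python sketch, not our actual scripts: the YCSB path, the thread counts, and the `hosts` parameter are illustrative assumptions):

```python
# Sketch: run every core workload with a range of thread counts
# and append each run's output to a single results file.
import subprocess

WORKLOADS = ["workloada", "workloadb", "workloadc",
             "workloadd", "workloade", "workloadf"]
THREAD_COUNTS = [1, 2, 4, 8, 16, 32, 64, 128]  # 8 runs per workload

def ycsb_command(workload, threads, phase="run"):
    """Build the YCSB command line for a single run."""
    return ["bin/ycsb", phase, "cassandra-10",
            "-P", f"workloads/{workload}",
            "-threads", str(threads),
            "-p", "hosts=127.0.0.1"]

def run_all(outfile="results.txt"):
    """Run every workload/thread-count combination, collecting output."""
    with open(outfile, "a") as out:
        for wl in WORKLOADS:
            for t in THREAD_COUNTS:
                proc = subprocess.run(ycsb_command(wl, t),
                                      capture_output=True, text=True)
                out.write(f"=== {wl} threads={t} ===\n")
                out.write(proc.stdout)
```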

  11. Plan Step:

  12. Plan Step: We used the core workloads that are included with the YCSB installation and ran each of them 8 times, increasing the number of threads each time. Workload A: Update-heavy workload – a 50/50 mix of reads and writes.

  13. Plan Step: Workload B: Read-mostly workload – a 95/5 read/write mix.

  14. Plan Step: Workload C: Read only - This workload is 100% read.

  15. Plan Step: Workload D: Read latest workload - In this workload, new records are inserted, and the most recently inserted records are the most popular.

  16. Plan Step: Workload E: Short ranges - In this workload, short ranges of records are queried, instead of individual records.

  17. Plan Step: Workload F: Read-modify-write - In this workload, the client will read a record, modify it, and write back the changes.
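Each core workload above is defined by a small property file shipped with YCSB. As an illustration, the stock file for Workload A looks roughly like this (record and operation counts are example values and may differ from the files we actually ran):

```properties
# workloada: update-heavy, 50/50 reads and updates, zipfian key popularity
recordcount=1000
operationcount=1000
workload=com.yahoo.ycsb.workloads.CoreWorkload

readallfields=true
readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0

requestdistribution=zipfian
```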

  18. Plan Step: Connect YCSB++ to Cassandra and run benchmark tests • We noticed a general degradation in performance with the two-node Cassandra configuration. • We assume it is due to the synchronization overhead between the two nodes. • More work has to be done in order to explain these results (see plans ahead).

  19. Plan Step: • We have installed, configured and run Apache ZooKeeper and Apache Hadoop, as they are prerequisites for the Accumulo database. • We made sure everything works by performing several basic operations using the client shell.

  20. Milestone 1 Progress Compared to Plan:

  21. Plans for ahead Milestone 1 • 1. Extend our Accumulo and Cassandra setups to include several clusters. • This stage is critical in order to get real, meaningful test results and to find security holes in the later stages.

  22. Plans for ahead Milestone 1 • 2. Improve our testing environment. This stage includes the following: • Write our own workloads (with ACL) • Run several clients simultaneously • Edit the test configurations according to our test plan (technical details)

  23. Plans for ahead Milestone 1 • Run diverse tests to understand the limiting factor in each test (it might be the testing equipment, CPU time, disk I/O, network limitations, synchronization overhead between nodes, and more), and, if possible, change the setup to eliminate that limiting factor. • Analyze the CPU and disk usage of the machines to understand the results better.

  24. Plans for ahead Milestone 1 • 3. Get into the Cassandra code and start the cell-level ACL implementation. • There are two main options: • Sending JSON strings as part of the HTTP requests and then storing them in Cassandra.

  25. Plans for ahead Milestone 1 • Storing simple strings such as "(Alice, rx) (Bob, rwxo) (Charlie, rx) ..." in Cassandra as-is; when Alice tries to read a file from Cassandra, we will check that the ACL allows her to do so.
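This second option can be sketched as follows (a minimal Python sketch under our own assumptions: the permission letters follow the example string above, and the real check would run inside Cassandra's read path rather than in client code):

```python
# Parse an ACL string like "(Alice, rx) (Bob, rwxo) (Charlie, rx)"
# and check whether a user holds a requested permission.
import re

ACL_ENTRY = re.compile(r"\(\s*(\w+)\s*,\s*([a-z]+)\s*\)")

def parse_acl(acl_string):
    """Turn the stored ACL string into a {user: set-of-permissions} map."""
    return {user: set(perms) for user, perms in ACL_ENTRY.findall(acl_string)}

def is_allowed(acl_string, user, permission):
    """Check whether `user` holds `permission` (e.g. 'r' for read)."""
    return permission in parse_acl(acl_string).get(user, set())

acl = "(Alice, rx) (Bob, rwxo) (Charlie, rx)"
```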

  26. Overall Milestone 1 • We managed to complete the milestone as planned. • Moreover, we succeeded in extending the system to two nodes. • This is quite a breakthrough given the difficulties we experienced with the installations, and it brings us that much closer to the goal of milestone #2: running a system consisting of several clusters.
