update research
@@ -56,6 +56,17 @@
Large language models (LLMs) show strong potential for processing highly structured text inputs and generating grammar representations. Leveraging LLMs for grammar generation would reduce the time and effort required to create grammars for fuzzing, unit testing, and input validation. In this project, we build a system that handles grammar creation using both automated and human feedback, and we develop a pipeline for assisted grammar generation on unseen domains. We show the potential of LLMs to generate complex grammars that can be used in many software testing applications and reflect on their limitations with complex unseen domains.
</p>
</div>
<div class="overlay-container research">
<a class="overlay-content" href="/research/Lu_etal_Evaluation_of_Caching_Strategies_for_Microservices.pdf">
<h2>Evaluation of Caching Strategies for Microservices</h2>
</a>
<p>
Arthur Lu, Derek Wang, Isha Atul Pardikar, Purva Gaikwad, Xuanzhe Han
</p>
<p class="abstract">
Microservices are a popular architecture because they provide a logical separation of concerns among multiple teams of developers. However, performance and scalability remain ongoing challenges and an active area of research. One approach to improving performance and scalability is caching. In this paper, we explore advanced caching strategies and evaluate their effectiveness for data access within microservices. We test different eviction policies, cache topologies, and data prefetching techniques on common access patterns. Our results show that these strategies perform well on select patterns, highlighting their potential to outperform state-of-the-art solutions such as MuCache. We hope that these advanced strategies can serve as a drop-in upgrade for existing microservice caches.
</p>
</div>
</main>
<footer>
</footer>
Binary file not shown.