update research

This commit is contained in:
2025-06-14 04:43:58 +00:00
parent 8133f30ecc
commit 40403b573a
4 changed files with 22 additions and 0 deletions


@@ -67,6 +67,28 @@
Microservices are a popular architecture due to their logical separation of concerns among multiple teams of developers. However, performance and scalability remain ongoing challenges with significant research focus. One approach to improving performance and scalability is caching. In this paper, we explore advanced caching strategies and evaluate their effectiveness in accessing data within microservices. We test different eviction policies, cache topologies, and data prefetching techniques on common access patterns. Our results show that these strategies perform well on select patterns, highlighting their potential to outperform state-of-the-art solutions such as MuCache. We hope that advanced strategies can serve as a drop-in upgrade for existing microservice caches.
</p>
</div>
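The abstract above names eviction policies as one axis of the evaluation. As a purely illustrative sketch (not code from the paper, and the class name is hypothetical), a least-recently-used (LRU) eviction policy can be expressed in a few lines of Python:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least-recently-used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Other policies from the same family (LFU, FIFO, random) differ only in which entry `put` discards, which is what makes eviction policy a natural knob to vary in a cache benchmark.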
<div class="overlay-container research">
<a class="overlay-content" href="/research/Lu_etal_Decision_Tree_Optimization_for_RMT_Architectures.pdf">
<h2>Decision Tree Optimization for RMT Architectures</h2>
</a>
<p>
Arthur Lu, Nathan Huey, Jai Parera, Krish Patel
</p>
<p class="abstract">
Processing packets in-network and learning meta-information about them is gaining traction in networking. By shifting computation from end nodes to the network, models can leverage the efficient computation afforded to routers by architectures such as the reconfigurable match-action table (RMT). By performing this shift in computation while maintaining wire speeds (line rate), model inference can be performed at no cost to latency. Furthermore, decision trees are naturally suited to the RMT model due to their layered, hierarchical structure. Previous works have proposed this exact use of ternary content addressable memory (TCAM). We investigate the proposed implementation by simulating it in a Python ideal-RMT simulator and propose new optimizations for TCAM usage. We propose a new priority-aware tree compression technique that reduces TCAM usage significantly without reducing model accuracy, and a range boundary optimization technique that additionally reduces TCAM usage at some cost to accuracy. We find that these optimizations are promising for reducing TCAM usage on RMT devices and discuss future directions for research.
</p>
</div>
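The mapping the abstract relies on, from a decision tree to match-action table entries, can be sketched as follows. This is an illustrative toy (the tree encoding and feature names are hypothetical, not the paper's simulator): each root-to-leaf path becomes one priority-ordered match rule of the kind an RMT/TCAM stage would hold.

```python
# Toy sketch: flatten a decision tree (nested dicts) into a list of
# priority-ordered match rules, one rule per root-to-leaf path.

def tree_to_rules(node, conds=(), rules=None):
    """Walk the tree; each leaf emits a rule matching its path conditions."""
    if rules is None:
        rules = []
    if "label" in node:  # leaf node: emit one rule
        rules.append({
            "match": list(conds),      # conjunction of path conditions
            "action": node["label"],   # classification result
            "priority": len(rules),    # earlier paths match first
        })
        return rules
    feat, thr = node["feature"], node["threshold"]
    tree_to_rules(node["left"], conds + ((feat, "<=", thr),), rules)
    tree_to_rules(node["right"], conds + ((feat, ">", thr),), rules)
    return rules

# Hypothetical 3-leaf tree over packet features.
tree = {
    "feature": "pkt_len", "threshold": 128,
    "left": {"label": "mouse"},
    "right": {
        "feature": "proto", "threshold": 6,
        "left": {"label": "tcp_flow"},
        "right": {"label": "elephant"},
    },
}
```

Because TCAM entries are consumed per rule, compression techniques like the priority-aware one described above amount to reducing the number (or width) of entries this flattening produces without changing which leaf each input reaches.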
<div class="overlay-container research">
<a class="overlay-content" href="/research/Baiget_etal_ExplainFuzz.pdf">
<h2>ExplainFuzz: Explainability and well-formedness for probabilistic test generation</h2>
</a>
<p>
Annaelle Baiget, London Bielicke, Arthur Lu, Brooke Simon
</p>
<p class="abstract">
Understanding and explaining the structure of generated test inputs is crucial for effective testing, debugging, and analysis of software systems. However, existing approaches—such as probabilistic context-free grammars (pCFGs) and large language models (LLMs)—lack the ability to provide fine-grained statistical explanations about generated test inputs and their structure. We introduce ExplainFuzz, a novel framework that leverages probabilistic circuits (PCs) to model and query the distribution of grammar-based inputs in an interpretable manner. Starting from a context-free grammar (CFG), we refactor it to support PC compilation, and train the resulting probabilistic circuit on a synthetically generated corpus produced with Grammarinator during a fuzzing campaign. The trained PC supports a variety of probabilistic queries, offering insight into the statistical distribution of generated inputs. Additionally, for the SQL domain, we demonstrate a custom generator that transforms PC-generated samples into executable queries by leveraging the PC's generation capabilities to enable concrete synthetic test input generation. We evaluate ExplainFuzz across multiple domains including SQL, REDIS, and JANUS, highlighting its ability to provide explainable, grammar-aware insights into test input structure. Our results show that ExplainFuzz outperforms traditional pCFGs and LLMs in terms of log-likelihood estimation and interpretability, contributing a new direction for explainable grammar-based fuzzing.
</p>
</div>
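The pCFG baseline the abstract compares against can be illustrated with a toy sampler. This is not ExplainFuzz itself (which compiles the grammar into a probabilistic circuit supporting richer queries than sampling); the grammar and weights below are invented for illustration:

```python
import random

# Toy probabilistic CFG: each nonterminal maps to weighted productions.
# Symbols not in the grammar are terminals emitted verbatim.
GRAMMAR = {
    "query": [(["SELECT ", "cols", " FROM t"], 1.0)],
    "cols":  [(["*"], 0.5), (["a"], 0.3), (["a", ", b"], 0.2)],
}

def sample(symbol, rng):
    """Expand a symbol by repeatedly picking productions by weight."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    prods, weights = zip(*GRAMMAR[symbol])
    prod = rng.choices(prods, weights=weights)[0]
    return "".join(sample(s, rng) for s in prod)
```

A sampler like this generates well-formed inputs but cannot answer conditional questions about the induced distribution (e.g. the probability of a structural feature given another), which is the gap probabilistic circuits are meant to fill.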
</main>
<footer>
</footer>
