Raghu Nambiar

Houston, Texas, United States
14K followers 500+ connections

About

Raghu Nambiar is a Corporate Vice President at AMD where he leads a global engineering…

Experience

  • AMD

    San Francisco Bay Area

Publications

  • Performance Evaluation and Benchmarking (Book 19)

    Springer

    Performance Evaluation and Benchmarking
    14th TPC Technology Conference, TPCTC 2022, Sydney, NSW, Australia, September 5, 2022, Revised Selected Papers

    See publication
  • (Book 18) Performance Evaluation and Benchmarking

    Springer Lecture Notes in Computer Science (LNCS)

    Other authors
    See publication
  • (Book 17) Performance Evaluation and Benchmarking

    Springer Lecture Notes in Computer Science (LNCS)

    Other authors
    See publication
  • (Book 16) Performance Evaluation and Benchmarking

    Springer Lecture Notes in Computer Science (LNCS)

    Other authors
    See publication
  • (Book 15) Performance Evaluation and Benchmarking for the Era of Cloud(s)

    Springer Lecture Notes in Computer Science (LNCS)

    Other authors
    See publication
  • A Benchmark Proposal for Massive Scale Inference System

    ACM

    Many benchmarks have been proposed to measure the training/learning aspects of Artificial Intelligence systems. This is without doubt very important, because its methods are very computationally expensive and therefore offer a wide variety of techniques to optimize computational performance. The inference aspect of Artificial Intelligence systems is becoming increasingly important as these systems are deployed at massive scale. However, there are no industry standards yet that measure the performance capabilities of massive scale AI deployments that must perform a very large number of complex inferences in parallel. In this work-in-progress paper we describe TPC-I, the industry's first benchmark to measure the performance characteristics of massive scale industry inference deployments. It models a representative use case, which enables hardware and software optimizations to directly benefit real customer scenarios.

    Other authors
    See publication
  • Defining Industry Standards for Benchmarking Artificial Intelligence

    Springer

    Introduced in 2009, the Technology Conference on Performance Evaluation and Benchmarking (TPCTC) is a forum bringing together industry experts and researchers to develop innovative techniques for evaluation, measurement and characterization. This panel at the tenth TPC Technology Conference on Performance Evaluation and Benchmarking (TPCTC 2018) brought together industry experts and researchers from a broad spectrum of interests in the field of Artificial Intelligence (AI).

    Other authors
    See publication
  • Book (14): Performance Evaluation and Benchmarking for the Era of Artificial Intelligence

    Springer

    This book constitutes the thoroughly refereed post-conference proceedings of the 10th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2018, held in conjunction with the 44th International Conference on Very Large Databases (VLDB 2018) in August 2018.

    See publication
  • Encyclopedia of Big Data Technologies. Living Edition

    Springer

    Encyclopedia of Big Data Technologies. Living Edition

    Other authors
    See publication
  • Analysis of TPCx-IoT: The First Industry Standard Benchmark for IoT Gateway Systems.

    IEEE

    By 2020 it is estimated that 20 billion devices will be connected to the Internet. While the initial hype around this Internet of Things (IoT) stems from consumer use cases, the number of devices and data from enterprise use cases is significant in terms of market share. With companies being challenged to choose the right digital infrastructure from different providers, there is a pressing need to objectively measure the hardware, operating system, data storage, and data management systems that can ingest, persist, and process the massive amounts of data arriving from sensors (edge devices). The Transaction Processing Performance Council (TPC) recently released the first industry standard benchmark for measuring the performance of gateway systems, TPCx-IoT. In this paper, we provide a detailed description of TPCx-IoT, mention design decisions behind key elements of this benchmark, and experimentally analyze how TPCx-IoT measures the performance of IoT gateway systems.

    Other authors
    See publication
  • Towards an Industry Standard for Benchmarking Artificial Intelligence Systems.

    IEEE

    Over the past three decades, the Transaction Processing Performance Council (TPC) has developed many standards for performance benchmarking. These standards have been a significant driving force behind the development of faster, less expensive, and more energy efficient systems. Historically, we have seen benchmark standards for transaction processing, decision support systems and virtualization. In recent years the TPC has developed benchmark standards for emerging areas such as big data analytics (BDA), the Internet of Things (IoT), database virtualization and Hyper-Convergence Infrastructure (HCI). This short paper discusses the TPC's plans for creating benchmark standards for Artificial Intelligence (AI) systems.

    See publication
  • Smart cities: Challenges and opportunities.

    IEEE

    This paper provides a high-level overview of the impact of projected population growth on urban living, and the challenges facing our increasingly crowded cities. We highlight recent technological advances that can be used to address these challenges, then review specific opportunities, including both current use cases for these technologies and future applications based on projected capabilities.

    Other authors
    See publication
  • 2017 IEEE International Conference on Big Data Proceedings

    IEEE

    2017 IEEE International Conference on Big Data Proceedings

    Other authors
    See publication
  • Book (13): Performance Evaluation and Benchmarking for the Analytics Era

    Springer

    This book constitutes the thoroughly refereed post-conference proceedings of the 9th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2017, held in conjunction with the 43rd International Conference on Very Large Databases (VLDB 2017) in August/September 2017.

    Other authors
    See publication
  • Industry Standards for the Analytics Era: TPC Roadmap

    Springer

    The Transaction Processing Performance Council (TPC) is a non-profit organization focused on developing data-centric benchmark standards and disseminating objective, verifiable performance data to industry. This paper provides a high-level summary of TPC benchmark standards, technology conference initiative, and new development activities in progress.

    Other authors
    See publication
  • TPCx-HS v2: Transforming with Technology Changes.

    Springer

    The TPCx-HS Hadoop benchmark has helped drive competition in the Big Data marketplace and has proven to be a successful industry standard benchmark for Hadoop systems. However, the Big Data landscape has rapidly changed since its initial release in 2014. Key technologies have matured, while new ones have risen to prominence in an effort to keep pace with the exponential expansion of datasets. In this paper, we introduce TPCx-HS v2 that is designed to address these changes in the Big Data technology landscape and stress both the hardware and software stacks including the execution engine (MapReduce or Spark) and Hadoop Filesystem API compatible layers for both on-premise and cloud deployments.

    Other authors
    See publication
  • Transforming Industry Through Data Analytics (Book 12)

    O’Reilly Media, Inc

    The information technology revolutions over the past six decades have been astonishing, from mainframes to personal computers to smart and connected economies. But those changes pale in comparison to what’s about to happen. By 2020, seven billion people and roughly 50 billion devices will be connected to the internet, leaving the world awash in data. How do we make sense of it all?
    In this insightful book, Raghunath Nambiar examines the role of analytics in enabling digital transformation within the enterprise, including challenges associated with the explosion of data.

    See publication
  • Benchmarking your IoT gateway systems

    Datacenter Dynamics

    The Internet of Things (IoT) is transforming entire industries, the way companies do business, and it impacts all of us on a personal level as well. Yet until now, comparing performance and price/performance for various IoT deployments and configurations has been impossible. A new benchmark standard created by the Transaction Processing Performance Council (TPC) – named TPC Express Benchmark IoT (TPCx-IoT) – is now available, and enables you to compare performance and total cost of ownership (TCO) for gateway systems.

    See publication
  • Book (11): Performance Evaluation and Benchmarking. Traditional - Big Data - Internet of Things

    Springer Verlag, Lecture Notes in Computer Science (LNCS)

    This book constitutes the thoroughly refereed post-conference proceedings of the 8th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2016, held in conjunction with the 42nd International Conference on Very Large Databases (VLDB 2016) in New Delhi, India, in September 2016.

    Other authors
    See publication
  • Industry Standards for the Analytics Era

    Springer

    The Transaction Processing Performance Council (TPC) is a non-profit organization focused on developing data-centric benchmark standards and disseminating objective, verifiable performance data to industry. This paper provides a high-level summary of TPC benchmark standards, technology conference initiative, and new development activities in progress.

    See publication
  • Lessons Learned: Performance Tuning for Hadoop Systems

    Springer

    Hadoop has become a strategic data platform for mainstream enterprises, adopted because it offers one of the fastest paths for businesses to unlock value from big data while building on existing investments. Hadoop is a distributed framework based on Java that is designed to work with applications implemented using the MapReduce programming model. This distributed framework enables the platform to pass the load to thousands of nodes across the whole Hadoop cluster.

    Other authors
    See publication
  • Book (10): Big Data and Analytics for Dummies

    Wiley Brand Company

    Big data and analytics is a rapidly expanding field. Big data incorporates technologies and practices designed to support the collection, storage, and management of a wide variety of data types that are produced at ever‐increasing rates. Analytics combine statistics, machine learning, and data preprocessing to extract valuable insights from big data.

    Other authors
    See publication
  • Book (8): Performance Evaluation and Benchmarking: Traditional to Big Data to Internet of Things

    Springer Verlag, Lecture Notes in Computer Science (LNCS)

    This book constitutes the thoroughly refereed post-conference proceedings of the 7th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2015, held in conjunction with the 41st International Conference on Very Large Databases (VLDB 2015) in Kohala Coast, Hawaii, USA, in August/September 2015.

    See publication
  • Book (9): Big Data Benchmarking

    Springer

    This book constitutes the thoroughly refereed post-workshop proceedings of the 6th International Workshop on Big Data Benchmarking, WBDB 2015, held in Toronto, ON, Canada, in June 2015 and the 7th International Workshop, WBDB 2015, held in New Delhi, India, in December 2015.

    Other authors
    See publication
  • Enhancing Data Generation in TPCx-HS with a Non-uniform Random Distribution

    Springer

    Developed by the Transaction Processing Performance Council, the TPC Express Benchmark™ HS (TPCx-HS) is the industry's first standard for benchmarking big data systems. It is designed to provide an objective measure of hardware, operating system and commercial Apache Hadoop File System API compatible software distributions, and to provide the industry with verifiable performance, price-performance and availability metrics.

    Other authors
    See publication
  • What’s New About the TPC-DS 2.0 Big Data Benchmark?

    DataInformed.com

    TPC-DS 2.0 is the first industry standard benchmark for measuring the end-to-end performance of SQL-based big data systems. Building upon the well-studied TPC-DS benchmark 1.0, Version 2.0 was specifically designed for SQL-based big data while retaining all key characteristics of a decision support benchmark.

    Other authors
    See publication
  • Vendor-Neutral Benchmarks Drive Tech Innovation

    DataInformed.com

    Two of the major technology trends for the next decade are big data and IoT, which is why the TPC is now focused on developing benchmarks in these two arenas.

    See publication
  • Reinventing the TPC: From Traditional to Big Data to Internet of Things

    The Transaction Processing Performance Council (TPC) has made significant contributions to the industry and research with standards that encourage fair competition to accelerate product development and enhancements. Technology disruptions are changing the industry landscape faster than ever. This paper provides a high-level summary of the history of the TPC and recent initiatives to make sure that it is a relevant organization in the age of digital transformation fueled by Big Data and the Internet of Things.

    See publication
  • Book (7): Performance Characterization and Benchmarking. Traditional to Big Data

    Springer Verlag, Lecture Notes in Computer Science (LNCS)

    This book contains the proceedings of the sixth TPC Technology Conference on Performance Evaluation and Benchmarking (TPCTC 2014), held in conjunction with the 40th International Conference on Very Large Data Bases (VLDB 2014) in Hangzhou, China, from September 2nd to September 5th, 2014, and includes twelve selected peer-reviewed papers.

    Other authors
    See publication
  • Book (6): Advancing Big Data Benchmarks

    Springer International Publishing

    Proceedings of the 2013 Workshop Series on Big Data Benchmarking, WBDB.cn, Xi'an, China, July 16-17, 2013 and WBDB.us, San José, CA, USA, October 9-10, 2013, Revised Selected Papers

    Other authors
    See publication
  • A Standard for Benchmarking Big Data Systems

    IEEE

    Big Data technologies like Hadoop have become an important part of the enterprise IT ecosystem. TPC Express Benchmark™HS (TPCx-HS) was developed to provide an objective measure of hardware, operating system and commercial Apache Hadoop File System API compatible software distributions, and to provide the industry with verifiable performance, price-performance, and availability and energy efficiency metrics. The benchmark models a continuous system availability of 24 hours a day, 7 days a week.

    See publication
  • Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data

    Springer

    Enterprises perceive a huge opportunity in mining information that can be found in big data. New storage systems and processing paradigms are allowing for ever larger data sets to be collected and analyzed. The high demand for data analytics and rapid development in technologies has led to a sizable ecosystem of big data processing systems. However, the lack of established, standardized benchmarks makes it difficult for users to choose the appropriate systems that suit their requirements. To address this problem, we have developed the BigBench benchmark specification.

    Other authors
    See publication
  • Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data

    Springer Verlag

    Enterprises perceive a huge opportunity in mining information that can be found in big data. New storage systems and processing paradigms are allowing for ever larger data sets to be collected and analyzed. The high demand for data analytics and rapid development in technologies has led to a sizable ecosystem of big data processing systems. However, the lack of established, standardized benchmarks makes it difficult for users to choose the appropriate systems that suit their requirements. To address this problem, we have developed the BigBench benchmark specification. BigBench is the first end-to-end big data analytics benchmark suite. In this paper, we present the BigBench benchmark and analyze the workload from a technical as well as a business point of view. We characterize the queries in the workload along different dimensions, according to their functional characteristics, and also analyze their runtime behavior. Finally, we evaluate the suitability and relevance of the workload from the point of view of enterprise applications, and discuss potential extensions to the proposed specification in order to cover typical big data processing use cases.

    Other authors
    See publication
  • Introducing TPCx-HS: The First Industry Standard for Benchmarking Big Data Systems

    Springer Verlag

    The designation Big Data has become a mainstream buzz phrase across many industries as well as research circles. Today many companies are making performance claims that are not easily verifiable and comparable in the absence of a neutral industry benchmark. Instead, one of the test suites used to compare performance of Hadoop based Big Data systems is the TeraSort. While it nicely defines the data set and tasks to measure Big Data Hadoop systems, it lacks a formal specification and enforcement rules that enable the comparison of results across systems. In this paper we introduce TPCx-HS, the industry’s first industry standard benchmark, designed to stress both hardware and software that is based on Apache HDFS API compatible distributions. TPCx-HS extends the workload defined in TeraSort with formal rules for implementation, execution, metric, result verification, publication and pricing. It can be used to assess a broad range of system topologies and implementation methodologies of Big Data Hadoop systems in a technically rigorous and directly comparable and vendor-neutral manner.

    Other authors
    See publication
  • YCSB+T: Benchmarking Web-Scale Transactional Databases

    IEEE International Conference on Data Engineering Workshops 2014 (CloudDB)

    Other authors
    See publication
  • Benchmarking Big Data - TPC Initiatives

    NIST Data Science Symposium Proceedings

    Industry standard benchmarks have played, and continue to play, a crucial role in the advancement of the computing industry. Demand for them has existed since buyers were first confronted with the choice between purchasing one system over another. Historically we have seen that industry standard benchmarks enabled healthy competition that results in product improvements and the evolution of brand technologies. Now, Big Data has become an integral part of the mainstream IT ecosystem across all verticals. Industry and the research community are challenged to find effective means to measure the performance and price-performance of hardware and software dealing with big data. Considering this importance, the Transaction Processing Performance Council (TPC.org) has formed a committee to develop a set of industry standards to measure these aspects. This session presents a status report from this committee.

    See publication
  • Book (5): Performance Characterization and Benchmarking

    Springer

    The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.

    Other authors
    See publication
  • Data Management - A Look Back and a Look Ahead

    Springer (LNCS)

    The essence of data management is to store, manage and process data. In 1970, E.F. Codd developed the relational data model and the universal data language “SQL” for data access and management. Over the years, relational data management systems have become an integral part of every organization’s data management portfolio. This paper gives an overview of the technology trends in data management, some of the emerging technologies and related challenges and opportunities, and the imminent convergence of platforms for efficiency and effectiveness.

    Other authors
    See publication
  • A Review of System Benchmark Standards and a Look Ahead Towards an Industry Standard for Benchmarking Big Data Workloads

    IGI Global

    This chapter looks into techniques to measure the effectiveness of hardware and software platforms dealing with big data.

    Other authors
    See publication
  • A Look at Challenges and Opportunities of Big Data Analytics in Healthcare

    IEEE

    Presented at the 1st IEEE Big Data 2013 Workshop on BigData in Bioinformatics and Health Care Informatics.

    Other authors
    See publication
  • VLDB Industry Vision: Keeping the TPC Relevant!

    VLDB 2013

    VLDB 2013 Industry Vision paper on the crucial role of industry standard benchmarks in the advancement of the computing industry.

    Other authors
    See publication
  • Benchmarking Big Data Systems and the BigData Top100 List

    ISSN: 2167-6461. Mary Ann Liebert, Inc.

    Published by Mary Ann Liebert, Inc. in the inaugural issue of the Big Data Journal, announced at the Strata conference 2013. This article describes a community-based effort for defining a big data benchmark.

    Other authors
    See publication
  • Benchmark Roadmap 2012

    Springer Verlag

    Historically known for database-centric standards, the TPC is now developing standards for consolidation using virtualization technologies and multi-source data integration, and exploring new ideas such as Big Data and Big Data Analytics to keep pace with rapidly changing industry demands. This paper gives a high level overview of the current state of the TPC in terms of existing standards, standards under development and future outlook.

    Other authors
    See publication
  • Setting the Direction for Big Data Benchmark Standards

    Springer Verlag

    The Workshop on Big Data Benchmarking (WBDB2012), held on May 8-9, 2012 in San Jose, CA, served as an incubator for several promising approaches to define a big data benchmark standard for industry. Through an open forum for discussions on a number of issues related to big data benchmarking—including definitions of big data terms, benchmark processes and auditing — the attendees were able to extend their own view of big data benchmarking as well as communicate their own ideas, which ultimately led to the formation of small working groups to continue collaborative work in this area. In this paper, we summarize the discussions and outcomes from this first workshop, which was attended by about 60 invitees representing 45 different organizations, including industry and academia. Workshop attendees were selected based on their experience and expertise in the areas of management of big data, database systems, performance benchmarking, and big data applications. There was consensus among participants about both the need and the opportunity for defining benchmarks to capture the end-to-end aspects of big data applications. Following the model of TPC benchmarks, it was felt that big data benchmarks should not only include metrics for performance, but also price/performance, along with a sound foundation for fair comparison through audit mechanisms. Additionally, the benchmarks should consider several costs relevant to big data systems including total cost of acquisition, setup cost, and the total cost of ownership, including energy cost. The second Workshop on Big Data Benchmarking will be held in December 2012 in Pune, India, and the third meeting is being planned for July 2013 in Xi’an, China.

    Other authors
    See publication
  • Big Data Benchmarking

    ACM, 9th ACM International Conference on Autonomic Computing

    Discussions on the outcomes from the Workshop on Big Data Benchmarking (WBDB2012) held on May 8-9, 2012 in San Jose, CA at the 9th ACM International Conference on Autonomic Computing (MBDS'12)

    Other authors
    See publication
  • Cisco UCS Ecosystem for Oracle: Extend Support to Big Data

    Cisco

    NoSQL has emerged as an increasingly important part of big data trends for applications that demand large volumes of simple reads and updates against very large datasets. Cisco and Oracle are the first vendors collaborating to deliver enterprise-class NoSQL solutions. Exceptional performance, scalability, availability and manageability are made possible by the combination of the Cisco Unified Computing System (UCS) and Oracle NoSQL Database.

    Other authors
  • Book (3): Topics in Performance Evaluation, Measurement and Characterization

    Springer

    Topics in Performance Evaluation, Measurement and Characterization

    Other authors
    See publication
  • Metrics for Measuring the Performance of the Mixed Workload

    Springer

    Advances in hardware architecture have begun to enable database vendors to process analytical queries directly on operational database systems without impeding the performance of mission-critical transaction processing too much. In order to evaluate such systems, we recently devised the mixed workload CH-benCHmark, which combines transactional load based on TPC-C order processing with decision support load based on a TPC-H-like query suite run in parallel on the same tables in a single database system.

    Other authors
    See publication
  • Shaping the Landscape of Industry Standard Benchmarks

    Springer

    Established in 1988, the Transaction Processing Performance Council (TPC) has had a significant impact on the computing industry’s use of industry-standard benchmarks. In this paper, the authors look at the contributions of the Transaction Processing Performance Council in shaping the landscape of industry standard benchmarks – from defining the fundamentals like performance, price for performance, and energy efficiency, to creating standards for independently auditing and reporting various aspects of the systems under test.

    Other authors
    See publication
  • Transactional Key-Value Storage: Super Simple, Super Fast, Super Flexible

    Cisco

    The Oracle NoSQL database provides optimized distributed, highly available key-value storage for large-volume, latency-sensitive applications and web services. It can also provide fast, reliable, distributed storage to applications that need to integrate with extract, transform, and load (ETL) processing. Cisco and Oracle have partnered to deliver tested and certified solutions that help reduce risk when deploying Big Data solutions.

  • The Mixed Workload CH-benCHmark

    ACM/DBTest

    While standardized and widely used benchmarks address either operational or real-time Business Intelligence (BI) workloads, the lack of a hybrid benchmark led us to the definition of a new, complex, mixed workload benchmark, called mixed workload CH-benCHmark. This benchmark bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP, and executes a complex mixed workload: a transactional workload based on the order entry processing of TPC-C and a corresponding TPC-H-equivalent OLAP query suite run in parallel on the same tables in a single database system. As it is derived from these two most widely used TPC benchmarks, the CH-benCHmark produces results highly relevant to both hybrid and classic single-workload systems.

    Other authors
    See publication
  • Information Explosion: A Storage Perspective

    NSF

    As the scale of data has grown from the order of megabytes to the order of petabytes, energy consumption and power provisioning have become a key consideration in managing (i.e., analyzing and storing) such large datasets. Traditional data processing methods have largely ignored incorporating these energy costs in planning, provisioning and processing of data processing and data management tasks.

    See publication
  • Optimizing for Energy Efficiency

    ACM, SPEC

    Historically compute server performance has been the most important pillar in the evaluation of datacenter efficiency, which can be measured using a variety of industry standard benchmarks. With the introduction of industry standard servers, price-performance became the second pillar in the "efficiency equation". Today with an increased awareness in the industry for power optimized designs and corporate initiatives to reduce carbon emissions, data center efficiency needs to incorporate yet another key element in this equation: energy efficiency. Initial models based on "name-plate" power consumption have been used to estimate energy efficiency [3][6][8] while recently industry standard consortia like SPEC, TPC and SPC have started amalgamating new energy metrics with their traditional performance metrics. TPC-Energy enables the measuring and reporting of energy efficiency for transaction processing systems and decision support systems [17]. In this paper we analyze TPC-C benchmark configurations that may achieve leadership results in TPC-Energy using existing, more energy efficient technologies, such as solid-state drives for storage subsystems, low power processors and high density DRAM in back end server and middle tier systems. Even though the study is based on TPC-C configurations these configuration optimizations are applicable to other benchmarks and production systems alike. We envision that the energy efficiency metrics and related optimizations to claim benchmark leadership will accelerate development and qualification of energy efficient components and solutions.

    Other authors
    See publication
  • Power Based Performance and Capacity Estimation Models for Enterprise Information Systems

    IEEE Special Issue on Energy Aware Big Data Processing

    Historically, the performance and purchase price of enterprise information systems have been the key arguments in purchasing decisions. With rising energy costs and increasing power use due to the ever-growing demand for compute capacity (servers, storage, networks etc.), electricity bills have become a significant expense for today’s data centers. In the very near future, energy efficiency is expected to be one of the key purchasing arguments. Having realized this trend, the Transaction Processing Performance Council has developed the TPC-Energy specification. It defines a standard methodology for measuring, reporting and fairly comparing power consumption of enterprise information systems. Wide industry adoption of TPC-Energy requires a large body of benchmark publications across multiple software and hardware platforms, which could take several years. Meanwhile, we believe that analytical power estimates based on nameplate power are a useful tool for estimating power consumption of TPC benchmark configurations as well as enterprise information systems. This paper presents enhancements to previously published energy estimation models based on the TPC-C and TPC-H benchmarks from the same authors and a new model, based on the TPC-E benchmark. The models can be applied to estimate power consumption of enterprise OLTP and Decision Support systems.

    Other authors
    See publication
  • Enterprise IO Scalability: LSI WarpDrive and Cisco UCS C-Series Servers

    Cisco

    In this paper we look at the performance, scalability and energy efficiency aspects of storage systems based on the LSI WarpDrive Acceleration card SLP-300 in Cisco UCS C-Series rack-mounted servers. The study demonstrates that LSI WarpDrive cards combined with Cisco Unified Computing innovations can help to deliver industry-leading performance and scalability at a fraction of the power footprint of traditional hard disk drive based solutions.

    Other authors
    See publication
  • Book (2): Performance Evaluation, Measurement and Characterization of Complex Systems

    Springer

    Selected topics in Performance Evaluation, Measurement and Characterization of Complex Systems

    Other authors
    See publication
  • Building Enterprise Class Real-Time Energy Efficient Decision Support Systems

    Springer

    In today’s highly competitive marketplace, companies have an insatiable need for up-to-the-second information about their business’ operational state, while generating Terabytes of data per day. The ability to convert this data into meaningful business information in a timely, cost effective manner is critical to their competitiveness. For many, it is no longer acceptable to move operational data into specialized analytical tools because of the delay this additional step would take. In certain cases they prefer to directly run queries on their operational data. To keep the response time of these queries low while data volume increases, IT departments are forced to buy faster processors or increase the number of processors per system. At the same time they need to scale the I/O subsystem to keep their systems balanced. While processor performance has been doubling every two years in accordance with Moore’s Law, I/O performance is lagging far behind. As a consequence, storage subsystems not only have to cope with the increase in data capacity, but, foremost, with the increase in I/O throughput demand, which is often limited by the disk drive performance and the wire bandwidth between the server and storage.
    A solution to this problem is to scale the I/O subsystem for capacity and to cache the database in main memory for performance. This approach not only reduces the I/O requirements, but also significantly reduces power consumption. As the database is physically located on durable media just like traditional databases, all ACID requirements are met.

    Other authors
    See publication
  • How to Advance TPC Benchmarks with Dependability Aspects

    Springer

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

    Other authors
    See publication
  • Transaction Performance vs. Moore's Law: A Trend Analysis

    Springer Verlag, Lecture Notes in Computer Science (LNCS)

    Intel co-founder Gordon E. Moore postulated in his famous 1965 paper that the number of components in integrated circuits had doubled every year from their invention in 1958 until 1965, and then predicted that the trend would continue for at least ten years. Later, David House, an Intel colleague, after factoring in the increase in performance of transistors, concluded that integrated circuits would double in performance every 18 months. Despite this trend in microprocessor improvements, your favored text editor continues to take the same time to start and your PC takes pretty much the same time to reboot as it took 10 years ago. Can this observation be made on systems supporting the fundamental aspects of our information based economy, namely transaction processing systems?
    For over two decades the Transaction Processing Performance Council (TPC) has been very successful in disseminating objective and verifiable performance data to the industry. During this period the TPC’s flagship benchmark, TPC-C, which simulates Online Transaction Processing (OLTP) Systems has produced over 750 benchmark publications across a wide range of hardware and software platforms representing the evolution of transaction processing systems. TPC-C results have been published by over two dozen unique vendors and over a dozen database platforms, some of them exist, others went under or were acquired. But TPC-C survived. Using this large benchmark result set, we discuss a comparison of TPC-C performance and price-performance to Moore’s Law.

    Other authors
    See publication
  • Transaction Processing Performance Council (TPC): State of the Council 2010

    Springer

    The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC’s existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC’s active involvement in the early creation of additional future benchmarks.

    Other authors
    See publication
  • Benchmarks Meet Industry Demands

    Data Centre Management Magazine

    The TPC’s most recent benchmark development effort is in the area of virtualization. The TPC-Virtualization Work Group was formed to respond to the growth in virtualization technology and capture some of the common uses of virtualization with database workloads in a rigorous and independent benchmark for virtual environments.

    Other authors
    See publication
  • Energy Benchmarks: A Detailed Analysis

    ACM SIGCOMM

    In light of an increase in energy cost and energy consciousness, industry standard organizations such as the Transaction Processing Performance Council (TPC), Standard Performance Evaluation Corporation (SPEC) and Storage Performance Council (SPC) as well as the U.S. Environmental Protection Agency have developed tests to measure energy consumption of computer systems. Although all of these consortia aim at standardizing power consumption measurement using benchmarks, ultimately aiming to reduce overall power consumption, and to aid in making purchase decisions, their methodologies differ slightly. For instance, some organizations developed specialized benchmarks while others added energy metrics to existing benchmarks. In this paper we give a comprehensive overview of the currently available energy benchmarks followed by an in-depth analysis of their commonalities and differences.

    Other authors
    • Meikel Poess
    • John M. Stephens, Jr.
    • Karl Huppler
    • Evan Haines
    See publication
  • Tuning Servers, Storage and Database for Energy Efficient Data Warehouses

    IEEE

    Undoubtedly, reducing power consumption is at the top of the priority list for system vendors and data center managers, who are challenged by customers, analysts, and government agencies to implement green initiatives. Hardware and software vendors have developed an array of power preserving techniques. On-demand-driven clock speeds for processors, energy efficient power supplies, and operating-system-controlled dynamic power modes are just a few hardware examples. Software vendors have contributed to energy efficiency by implementing power efficient coding methods, such as advanced compression and enabling applications to take advantage of large memory caches. However, adoption of these power-preserving technologies in data centers is not straightforward, especially for large, complex applications such as data warehouses. Data warehouse workloads typically have oscillating resource utilizations, which makes identifying the largest power consumers difficult. Most importantly, while preserving power remains a critical consideration, performance and availability goals must still be met with systems using power-preserving technologies. This paper evaluates the tradeoffs between existing power-saving techniques and their performance impact on data warehouse applications. Our analysis will guide system developers and data center managers in making informed decisions regarding adopting power-preserving techniques.

    Other authors
    See publication
  • A Power Consumption Analysis of Decision Support Systems

    ACM

    Enterprise data warehouses have been doubling every three years, demanding high compute power and storage capacities. The industry is expected to meet such compute demands, but dealing with the dramatic increase in energy requirements will be challenging. Energy efficiency has already become the top priority for system developers and data center managers. While system vendors focus on developing energy efficient systems, there is a huge demand for industry-standard workloads and processes to measure and analyze energy consumption for enterprise data warehouses. SPEC has developed a power benchmark for single servers (SPECpower_ssj2008), but so far, no benchmark exists that measures the power consumption of large, complex systems. In this paper, we present a simple power consumption model for enterprise data warehouses based on the industry standard TPC-H benchmark. By applying our model to a subset of 7 years of TPC-H publications, we identify the most power-intensive components where research and development should focus and also analyze existing power consumption trends over time. This paper complements a similar study conducted for enterprise OLTP systems published by the same authors at VLDB 2008 and the Transaction Processing Performance Council's initiative to add an energy metric to its benchmarks.

    Other authors
    See publication
  • Databases Are Not Toasters: A Framework for Comparing Data Warehouse Appliances

    Springer

    The success of Business Intelligence (BI) applications depends on two factors, the ability to analyze data ever more quickly and the ability to handle ever increasing volumes of data. Data Warehouse (DW) and Data Mart (DM) installations that support BI applications have historically been built using traditional architectures either designed from the ground up or based on customized reference system designs. The advent of Data Warehouse Appliances (DA) brings packaged software and hardware solutions that address performance and scalability requirements for certain market segments. The differences between DAs and custom installations make direct comparisons between them impractical and suggest the need for a targeted DA benchmark. In this paper we review data warehouse appliances by surveying thirteen products offered today. We assess the common characteristics among them and propose a classification for DA offerings. We hope our results will help define a useful benchmark for DAs.

    Other authors
    See publication
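    A small sketch of how a survey like this one might record appliance attributes for side-by-side comparison; the classification dimensions and product names below are hypothetical and do not reproduce the paper's actual taxonomy.

        from dataclasses import dataclass

        # Hypothetical classification record for comparing data warehouse appliances.

        @dataclass
        class ApplianceProfile:
            name: str
            architecture: str   # e.g. "shared-nothing" or "shared-everything"
            packaging: str      # e.g. "integrated hardware+software" or "software-only"
            storage: str        # e.g. "internal disks" or "external SAN"
            scaling: str        # e.g. "add nodes (scale-out)" or "larger node (scale-up)"

        CATALOG = [
            ApplianceProfile("ExampleDA-1", "shared-nothing",
                             "integrated hardware+software", "internal disks",
                             "add nodes (scale-out)"),
            ApplianceProfile("ExampleDA-2", "shared-everything",
                             "software-only", "external SAN",
                             "larger node (scale-up)"),
        ]

        # Grouping by architecture is one way such a survey could organize offerings.
        by_architecture: dict[str, list[str]] = {}
        for profile in CATALOG:
            by_architecture.setdefault(profile.architecture, []).append(profile.name)
        print(by_architecture)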
  • Transaction Processing Performance Council (TPC): Twenty Years Later – A Look Back, A Look Ahead

    Springer

    The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty years. Today, the TPC is developing an energy efficiency metric and a new ETL benchmark, as well as investigating new areas for benchmark development in 2010 and beyond.

    Other authors
    See publication
  • Book (1): Performance Evaluation and Benchmarking

    Springer

    Selected topics in Performance Evaluation and Benchmarking

    Other authors
    See publication
  • Million Queries Per Hour

    HP

    The industry’s first blade system cluster to break the one million queries per hour barrier.

    Other authors
    See publication
  • Energy Cost, The Key Challenge of Today's Data Centers: A Power Consumption Analysis

    VLDB

    Historically, performance and price-performance of computer systems have been the key purchasing arguments for customers. With rising energy costs and increasing power use due to the ever-growing demand for computing power (servers, storage, networks), electricity bills have become a significant expense for today’s data centers. In the very near future, energy efficiency is expected to be one of the key purchasing arguments. Some performance organizations, such as SPEC, have developed power benchmarks for single servers (SPECpower_ssj2008), but so far, no benchmark exists that measures the power consumption of transaction processing systems. In this paper, we develop a power consumption model based on data readily available in the TPC-C full disclosure report of published benchmarks. We verify our model with measurements taken from three fully scaled and optimized TPC-C configurations including client (middle-tier) systems, database server, and storage subsystem. By applying this model to a subset of 7 years of TPC-C results, we identify the most power-intensive components and demonstrate the existing power consumption trends over time. Assuming similar trends in the future, hardware enhancements alone will not be able to satisfy the demand for energy efficiency. In its outlook, this paper looks at potential hardware and software enhancements to meet the energy efficiency demands of future systems. Realizing the importance of energy efficiency, the Transaction Processing Performance Council (TPC) has formed a working group to look into adding energy efficiency metrics to all its benchmarks. This paper is expected to complement this initiative.

    Other authors
    See publication
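    An illustrative calculation in the spirit of relating measured power to reported OLTP throughput, expressed here as watts per thousand transactions per minute; the power draw and throughput figures are assumed for demonstration and are not taken from the paper or from any published TPC-C result.

        # Energy-efficiency figure of merit: watts per thousand tpmC.
        # All numbers below are hypothetical.

        def watts_per_ktpmc(total_watts: float, tpmc: float) -> float:
            """Power consumed per thousand transactions per minute (tpmC)."""
            return total_watts / (tpmc / 1000.0)

        power_w = 9_500.0              # assumed wall power of the full configuration
        throughput_tpmc = 1_200_000.0  # assumed reported throughput
        print(f"{watts_per_ktpmc(power_w, throughput_tpmc):.2f} W per ktpmC")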
  • Why You Should Run TPC-DS: A Workload Analysis

    VLDB

    TPC-DS is intended to provide a fair comparison of various vendor implementations by providing highly comparable, controlled and repeatable tasks in evaluating the performance of decision support systems (DSS). Its workload is expected to test the upward boundaries of hardware system performance in the areas of CPU utilization, memory utilization, I/O subsystem utilization, and the ability of the operating system and database software to perform various complex functions important to DSS: examine large volumes of data, compute and execute the best execution plan for queries with a high degree of complexity, efficiently schedule a large number of user sessions, and give answers to critical business questions.

    Other authors
    See publication
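    A back-of-the-envelope queries-per-hour calculation from hypothetical per-query elapsed times, only to make the throughput notion concrete; this is not the official QphDS metric, whose formula is defined in the TPC-DS specification.

        # Simple queries-per-hour estimate from measured elapsed times.
        # Query names and timings below are made up for illustration.

        query_seconds = {
            "q3": 12.4, "q7": 48.0, "q19": 5.6, "q42": 9.1, "q52": 7.8, "q55": 6.3,
        }

        total_seconds = sum(query_seconds.values())
        queries_per_hour = len(query_seconds) * 3600.0 / total_seconds
        print(f"{len(query_seconds)} queries in {total_seconds:.1f} s "
              f"≈ {queries_per_hour:.0f} queries/hour")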
  • Large Scale Data Warehouses on Grid

    VLDB

    Grid computing has the potential of drastically changing enterprise computing as we know it today. The main concept of Grid computing is to treat computing as a utility: it should not matter where data resides, or which computer processes a task. This concept has been applied successfully to academic research. It also has many advantages for commercial data warehouse applications, such as virtualization, flexible provisioning, reduced cost due to commodity hardware, high availability, and high scale-out. In this paper we show how a large-scale, high-performing and scalable Grid-based data warehouse can be implemented using commodity hardware, Oracle Database, and the Linux operating system.

    Other authors
    See publication
  • The Making of TPC-DS

    VLDB

    For the last decade, the research community and the industry have used TPC-D and its successor TPC-H to evaluate the performance of decision support technology. Recognizing a paradigm shift in the industry, the Transaction Processing Performance Council has developed a new decision support benchmark, TPC-DS, expected to be released this year. From an ease-of-benchmarking perspective it is similar to past benchmarks; however, it adjusts for new technology and new approaches the industry has embarked on in recent years. This paper describes the main characteristics of TPC-DS and explains why some of the key decisions were made…

    Other authors
    See publication
  • Storage Configuration Guidelines for Data Warehouse Applications

    Business Intelligence Journal, Volume 6, Number 3

    Storage Configuration Guidelines for Data Warehouse Applications

    Other authors
  • Enterprise Scalability on Clustered Servers

    OTN

    Oracle and HP have collaborated to fully optimize Oracle 10g for HP Integrity servers. This partnership not only delivers record-setting benchmark results but also ensures customers get the highest-performing, most cost-effective Oracle and HP database systems.

    Other authors
    See publication
  • Million Transactions Per Minute

    HP

    The industry's first TPC-C benchmark cluster to break the one million transactions per minute barrier.

    Other authors
    See publication

Patents

  • Adaptive datacenter topology for distributed frameworks job control through network awareness

    US 9,785,522

  • Adaptive resource allocation in a large container/cloud deployment with infra aware application scheduling

    US 1003723

  • Allocating resources for multi-phase, distributed computing jobs

    US 9,489,225

  • Annotation of network activity through different phases of execution

    US 20160011925

  • Distributed application framework for prioritizing network traffic using application priority awareness

    US 9,825,878

  • Distributed application framework that uses network and application awareness for placing data

    US 20160234071

  • Infrastructure aware adaptive resource allocation

    US 1003501-US.02

  • Infrastructure aware query optimization

    US 1003743-US.01

  • Network traffic management using heat maps

    US 20160013990

  • Network traffic management using heat maps with actual and planned/estimated metrics

    US 20160013990

  • Next generation storage controller in hybrid cloud

    US 1012109

  • Next generation storage controller in hybrid environments

    US 15/826801

  • Optimized Hadoop task scheduler in an optimally placed virtualized Hadoop cluster using network cost optimizations

    US 9367344 B2

  • Optimized assignments and/or generation virtual machine for reducer tasks

    US 9,367,344

  • Optimizing placement of virtual machines

    US 9,769,084

  • Orchestrating micro-service deployment based on network policy health

    US 1003902

  • Task scheduling using virtual clusters

    US 9,485,197
