Software regression

A software regression is a type of software bug where a feature that has worked before stops working. This may happen after changes are applied to the software's source code, including the addition of new features and bug fixes.[1] A regression may also be introduced by changes to the environment in which the software is running, such as system upgrades, system patching, or a change to daylight saving time.[2] A software performance regression is a situation where the software still functions correctly, but performs more slowly or uses more memory or resources than before.[3] Various types of software regressions have been identified in practice, including the following:[4]

  • Local – a change introduces a new bug in the changed module or component.
  • Remote – a change in one part of the software breaks functionality in another module or component.
  • Unmasked – a change unmasks an already existing bug that had no effect before the change.

Regressions are often caused by bug fixes bundled into software patches. One approach to avoiding this kind of problem is regression testing. A properly designed test plan aims to prevent regressions before any software is released.[5] Automated testing and well-written test cases can reduce the likelihood of a regression.
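
As a minimal sketch of the idea (the function and tests below are hypothetical and not drawn from the cited sources), an automated regression test simply pins down behaviour that users already depend on, so that a later change which breaks that behaviour fails the test run instead of reaching a release:

import unittest

def normalize_phone(number: str) -> str:
    """Strip formatting characters from a phone number.

    A later "fix" that also stripped a leading '+' would make the second
    test below fail, exposing the functional regression before release.
    """
    return "".join(ch for ch in number if ch.isdigit() or ch == "+")

class NormalizePhoneRegressionTest(unittest.TestCase):
    def test_strips_separators(self):
        self.assertEqual(normalize_phone("(555) 123-4567"), "5551234567")

    def test_keeps_international_prefix(self):
        # Pins down existing behaviour so a regression is caught automatically.
        self.assertEqual(normalize_phone("+1 555 123 4567"), "+15551234567")

if __name__ == "__main__":
    unittest.main()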

Prevention and detection

Techniques have been proposed that try to prevent regressions from being introduced into software at various stages of development, as outlined below.

Prior to release

To avoid regressions being seen by the end user after release, developers regularly run regression tests after changes are introduced to the software. These tests can include unit tests to catch local regressions as well as integration tests to catch remote regressions.[6] Regression testing techniques often leverage existing test cases to minimize the effort involved in creating them.[7] However, due to the volume of these existing tests, it is often necessary to select a representative subset, using techniques such as test-case prioritization.
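
A simple selection heuristic, shown as an illustrative sketch below (the test names and run counts are hypothetical), is to order existing tests by historical failure rate so that the most fault-revealing tests run first when only part of the suite fits into the available time:

from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int      # how many times the test has been executed historically
    failures: int  # how many of those runs failed

def prioritize(tests: list[TestRecord], budget: int) -> list[str]:
    """Return up to `budget` test names, highest historical failure rate first."""
    ranked = sorted(tests, key=lambda t: t.failures / max(t.runs, 1), reverse=True)
    return [t.name for t in ranked[:budget]]

history = [
    TestRecord("test_login", runs=200, failures=1),
    TestRecord("test_checkout", runs=200, failures=12),
    TestRecord("test_search", runs=150, failures=5),
]
print(prioritize(history, budget=2))  # ['test_checkout', 'test_search']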

For detecting performance regressions, software performance tests are run on a regular basis to monitor the response time and resource usage metrics of the software after subsequent changes.[8] Unlike functional regression tests, the results of performance tests are subject to variance; that is, results can differ between runs due to variance in performance measurements. As a result, a decision must be made on whether a change in performance numbers constitutes a regression, based on experience and end-user demands. Approaches such as statistical significance testing and change point detection are sometimes used to aid in this decision.[9]
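
The sketch below illustrates the statistical approach; the response-time samples and the thresholds are made up, and the two-sample t-test (here via SciPy) is only one of several tests that could be used:

from scipy import stats

baseline_ms = [102, 99, 101, 100, 103, 98, 101, 100]     # previous build
candidate_ms = [109, 112, 108, 111, 110, 113, 109, 112]  # new build

t_stat, p_value = stats.ttest_ind(candidate_ms, baseline_ms)
mean_delta = sum(candidate_ms) / len(candidate_ms) - sum(baseline_ms) / len(baseline_ms)

# Flag a regression only if the slowdown is statistically significant and
# large enough to matter to end users (both thresholds are illustrative).
if p_value < 0.05 and mean_delta > 5:
    print(f"Possible performance regression: +{mean_delta:.1f} ms (p={p_value:.3f})")
else:
    print("No significant performance change detected")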

Prior to commit

Since debugging and localizing the root cause of a software regression can be expensive,[10][11] methods also exist that try to prevent regressions from being committed into the code repository in the first place. For example, Git hooks enable developers to run test scripts before code changes are committed or pushed to the code repository.[12] In addition, change impact analysis has been applied to software to predict the impact of a code change on various components of the program, and to supplement test case selection and prioritization.[13][14] Software linters are also often added to commit hooks to ensure consistent coding style, thereby minimizing stylistic issues that can make the software prone to regressions.[15]
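
A minimal sketch of such a hook follows, assuming a Python project whose test suite runs under pytest (an assumption about the project, not a requirement of Git). Git runs any executable file saved as .git/hooks/pre-commit before each commit, and a non-zero exit status aborts the commit:

#!/usr/bin/env python3
# Sketch: save as .git/hooks/pre-commit and mark it executable. Runs the test
# suite before every commit and aborts the commit if any test fails.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "--quiet"])
if result.returncode != 0:
    print("pre-commit: tests failed; commit aborted", file=sys.stderr)
    sys.exit(1)  # non-zero exit status makes Git abort the commit
sys.exit(0)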

Localization

Many of the techniques used to find the root cause of non-regression software bugs can also be used to debug software regressions, including breakpoint debugging, print debugging, and program slicing. The techniques described below are often used specifically to debug software regressions.

Functional regressions

A common technique used to localize functional regressions is bisection, which takes both a buggy commit and a previously working commit as input, and tries to find the root cause by doing a binary search on the commits in between.[16] Version control systems such as Git and Mercurial provide built-in ways to perform bisection on a given pair of commits.[17][18]
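
For example, Git's git bisect run can drive the binary search with a user-supplied test script. The sketch below assumes a Python project and a hypothetical test file; the exit-code convention (0 for a good commit, 125 to skip an untestable commit, any other non-zero value for a bad commit) is the one git bisect run expects:

#!/usr/bin/env python3
# Hypothetical bisection script. Typical usage:
#   git bisect start <bad-commit> <good-commit>
#   git bisect run python bisect_test.py
import subprocess
import sys

# If the tree cannot even be byte-compiled, skip this commit (exit code 125).
build = subprocess.run([sys.executable, "-m", "compileall", "-q", "."])
if build.returncode != 0:
    sys.exit(125)

# Run only the test that exposes the regression; 0 = good commit, 1 = bad commit.
tests = subprocess.run([sys.executable, "-m", "pytest", "--quiet", "tests/test_feature.py"])
sys.exit(0 if tests.returncode == 0 else 1)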

Other options include directly associating the result of a regression test with code changes;[19] setting divergence breakpoints;[20] or using incremental data-flow analysis, which identifies test cases, including failing ones, that are relevant to a set of code changes,[21] among others.

Performance regressions

Profiling measures the performance and resource usage of various components of a program, and is used to generate data useful in debugging performance issues. In the context of software performance regressions, developers often compare the call trees (also known as "timelines") generated by profilers for both the buggy version and the previously working version, and mechanisms exist to simplify this comparison.[22] Web development tools typically provide developers the ability to record these performance profiles.[23][24]
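
A small sketch using Python's built-in profiler is shown below; the profiled operation is a hypothetical stand-in. Profiling the same user-visible operation in the regressing build and in the previously working build yields two reports whose call trees and cumulative times can then be compared:

import cProfile
import pstats

def render_page():
    # Hypothetical stand-in for the operation whose performance regressed.
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
render_page()
profiler.disable()

# Functions ordered by cumulative time; running the same snippet against the
# previously working version produces the second report to compare against.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)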

Logging also helps with performance regression localization; as with call trees, developers can compare systematically placed performance logs from multiple versions of the same software.[25] A tradeoff exists when adding these performance logs: adding many logs helps developers pinpoint which portions of the software are regressing at a finer granularity, while adding only a few logs reduces the overhead of executing the program.[26]
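
A minimal sketch of such performance logging follows (the function names and workloads are hypothetical): a timing decorator emits one log line per call, and the logs produced by two versions of the software can then be compared step by step:

import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_duration(func):
    """Log the wall-clock duration of each call to `func`."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("PERF %s %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@log_duration
def load_data():
    time.sleep(0.05)   # stand-in for real work

@log_duration
def transform_data():
    time.sleep(0.02)   # stand-in for real work

load_data()
transform_data()
# Lines such as "PERF load_data 50.3 ms" can be diffed against the logs of
# the previous version to see which step regressed.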

Additional approaches include writing performance-aware unit tests to help with localization,[27] and ranking subsystems based on performance counter deviations.[28] Bisection can also be repurposed for performance regressions by treating commits that perform below (or above) a certain baseline value as buggy, and descending into either the earlier or the later half of the commit range based on the result of this comparison.
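
A sketch of bisection repurposed this way is shown below, assuming a hypothetical benchmark entry point and an illustrative 1.2-second baseline taken from the known-fast commit: the script reports a commit as bad whenever the measured time exceeds the baseline, so git bisect run converges on the commit that introduced the slowdown.

#!/usr/bin/env python3
# Hypothetical performance-bisection script. Usage, as with functional bisection:
#   git bisect start <slow-commit> <fast-commit>
#   git bisect run python perf_bisect.py
import subprocess
import sys
import time

BASELINE_SECONDS = 1.2  # illustrative; measured on the known-fast commit

start = time.perf_counter()
result = subprocess.run([sys.executable, "-m", "myapp.benchmark"])  # hypothetical entry point
elapsed = time.perf_counter() - start

if result.returncode != 0:
    sys.exit(125)  # benchmark could not run on this commit; skip it
sys.exit(1 if elapsed > BASELINE_SECONDS else 0)  # non-zero tells git bisect "bad"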

See also

  • Software rot
  • Software aging

References

  1. ^ Wong, W. Eric; Horgan, J.R.; London, Saul; Agrawal, Hira (1997). "A Study of Effective Regression Testing in Practice". Proceedings of the Eighth International Symposium on Software Reliability Engineering (ISSRE 97). IEEE. doi:10.1109/ISSRE.1997.630875. ISBN 0-8186-8120-9. S2CID 2911517.
  2. ^ Yehudai, Amiram; Tyszberowicz, Shmuel; Nir, Dor (2007). Locating Regression Bugs. Haifa Verification Conference. doi:10.1007/978-3-540-77966-7_18. Retrieved 10 March 2018.
  3. ^ Shang, Weiyi; Hassan, Ahmed E.; Nasser, Mohamed; Flora, Parminder (11 December 2014). "Automated Detection of Performance Regressions Using Regression Models on Clustered Performance Counters" (PDF). Archived from the original (PDF) on 13 January 2021. Retrieved 22 December 2019.
  4. ^ Henry, Jean-Jacques Pierre (2008). The Testing Network: An Integral Approach to Test Activities in Large Software Projects. Springer Science & Business Media. p. 74. ISBN 978-3540785040.
  5. ^ Richardson, Jared; Gwaltney, William Jr (2006). Ship It! A Practical Guide to Successful Software Projects. Raleigh, NC: The Pragmatic Bookshelf. pp. 32, 193. ISBN 978-0-9745140-4-8.
  6. ^ Leung, Hareton K.N.; White, Lee (November 1990). "A study of integration testing and software regression at the integration level". Proceedings of the International Conference on Software Maintenance. San Diego, CA, USA: IEEE. doi:10.1109/ICSM.1990.131377. ISBN 0-8186-2091-9. S2CID 62583582.
  7. ^ Rothermel, Gregg; Harrold, Mary Jean; Dedhia, Jeinay (2000). "Regression test selection for C++ software". Software Testing, Verification and Reliability. 10 (2): 77–109. doi:10.1002/1099-1689(200006)10:2<77::AID-STVR197>3.0.CO;2-E. ISSN 1099-1689.
  8. ^ Weyuker, E.J.; Vokolos, F.I. (December 2000). "Experience with performance testing of software systems: issues, an approach, and case study". IEEE Transactions on Software Engineering. 26 (12): 1147–1156. doi:10.1109/32.888628. ISSN 1939-3520.
  9. ^ Daly, David; Brown, William; Ingo, Henrik; O'Leary, Jim; Bradford, David (20 April 2020). "The Use of Change Point Detection to Identify Software Performance Regressions in a Continuous Integration System". Proceedings of the International Conference on Performance Engineering. Association for Computing Machinery. pp. 67–75. doi:10.1145/3358960.3375791. ISBN 978-1-4503-6991-6. S2CID 211677818.
  10. ^ Nistor, Adrian; Jiang, Tian; Tan, Lin (May 2013). "Discovering, reporting, and fixing performance bugs". Proceedings of the Working Conference on Mining Software Repositories (MSR). pp. 237–246. doi:10.1109/MSR.2013.6624035. ISBN 978-1-4673-2936-1. S2CID 12773088.
  11. ^ Agarwal, Pragya; Agrawal, Arun Prakash (17 September 2014). "Fault-localization techniques for software systems: a literature review". ACM SIGSOFT Software Engineering Notes. 39 (5): 1–8. doi:10.1145/2659118.2659125. ISSN 0163-5948. S2CID 12101263.
  12. ^ "Git - Git Hooks". git-scm.com. Retrieved 7 November 2021.
  13. ^ Orso, Alessandro; Apiwattanapong, Taweesup; Harrold, Mary Jean (1 September 2003). "Leveraging field data for impact analysis and regression testing". ACM SIGSOFT Software Engineering Notes. 28 (5): 128–137. doi:10.1145/949952.940089. ISSN 0163-5948.
  14. ^ Qu, Xiao; Acharya, Mithun; Robinson, Brian (September 2012). "Configuration selection using code change impact analysis for regression testing". Proceedings of the International Conference on Software Maintenance. pp. 129–138. doi:10.1109/ICSM.2012.6405263. ISBN 978-1-4673-2312-3. S2CID 14928793.
  15. ^ Tómasdóttir, Kristín Fjóla; Aniche, Mauricio; van Deursen, Arie (October 2017). "Why and how JavaScript developers use linters". Proceedings of the International Conference on Automated Software Engineering. pp. 578–589. doi:10.1109/ASE.2017.8115668. ISBN 978-1-5386-2684-9. S2CID 215750004.
  16. ^ Gross, Thomas (10 September 1997). "Bisection Debugging". Proceedings of the International Workshop on Automatic Debugging. Linköping University Electronic Press. pp. 185–191.
  17. ^ "Git - git-bisect Documentation". git-scm.com. Retrieved 7 November 2021.
  18. ^ "hg - bisect". www.selenic.com. Mercurial. Retrieved 7 November 2021.
  19. ^ "Reading 11: Debugging". web.mit.edu. MIT.
  20. ^ Buhse, Ben; Wei, Thomas; Zang, Zhiqiang; Milicevic, Aleksandar; Gligoric, Milos (May 2019). "VeDebug: Regression Debugging Tool for Java". Proceedings of the International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). pp. 15–18. doi:10.1109/ICSE-Companion.2019.00027. ISBN 978-1-7281-1764-5. S2CID 174799830.
  21. ^ Taha, A.-B.; Thebaut, S.M.; Liu, S.-S. (September 1989). "An approach to software fault localization and revalidation based on incremental data flow analysis". Proceedings of the Annual International Computer Software & Applications Conference. IEEE. pp. 527–534. doi:10.1109/CMPSAC.1989.65142. ISBN 0-8186-1964-3. S2CID 41978046.
  22. ^ Ocariza, Frolin S.; Zhao, Boyang (2021). "Localizing software performance regressions in web applications by comparing execution timelines". Software Testing, Verification and Reliability. 31 (5): e1750. doi:10.1002/stvr.1750. ISSN 1099-1689. S2CID 225416138.
  23. ^ "Analyze runtime performance". Chrome Developers. Google. Retrieved 7 November 2021.
  24. ^ "Performance analysis reference - Microsoft Edge Development". docs.microsoft.com. Microsoft. Retrieved 7 November 2021.
  25. ^ Yao, Kundi; B. de Pádua, Guilherme; Shang, Weiyi; Sporea, Steve; Toma, Andrei; Sajedi, Sarah (30 March 2018). "Log4Perf: Suggesting Logging Locations for Web-based Systems' Performance Monitoring". Proceedings of the International Conference on Performance Engineering. Association for Computing Machinery. pp. 127–138. doi:10.1145/3184407.3184416. ISBN 978-1-4503-5095-2. S2CID 4557038.
  26. ^ Li, Heng; Shang, Weiyi; Adams, Bram; Sayagh, Mohammed; Hassan, Ahmed E. (30 January 2020). "A Qualitative Study of the Benefits and Costs of Logging from Developers' Perspectives". IEEE Transactions on Software Engineering. 47 (12): 2858–2873. doi:10.1109/TSE.2020.2970422. S2CID 213679706.
  27. ^ Heger, Christoph; Happe, Jens; Farahbod, Roozbeh (21 April 2013). "Automated root cause isolation of performance regressions during software development". Proceedings of the International Conference on Performance Engineering. Association for Computing Machinery. pp. 27–38. doi:10.1145/2479871.2479879. ISBN 978-1-4503-1636-1. S2CID 2593603.
  28. ^ Malik, Haroon; Adams, Bram; Hassan, Ahmed E. (November 2010). "Pinpointing the Subsystems Responsible for the Performance Deviations in a Load Test". Proceedings of the International Symposium on Software Reliability Engineering. pp. 201–210. doi:10.1109/ISSRE.2010.43. ISBN 978-1-4244-9056-1. S2CID 17306870.