
IdenTrust offers this webpage to inform stakeholders of events impacting IGC CA services.

 

Tuesday, August 27, 2024

IdenTrust CRL Failure

 

1. Which CA components were affected by the incident

  • IdenTrust Global Common CRL responders

 

2. The CA's interpretation of the incident

  • IdenTrust generates the IGC CA1 CRL every 12 hours. On August 27, 2024, the IGC CA1 CRL generation scheduled for 05:12:31 did not occur as expected because the processing time exceeded the configured processing limit on the server. It was further discovered that a misconfiguration in the automated email alert for CRL generation failures prevented a timely alert (an illustrative independent freshness check is sketched after this list).
  • The failure was discovered at the 24-hour mark, and it took another 2 hours to triage and implement a fix that successfully generated the IGC CA1 CRL. This 26-hour gap between CRL generations violates IGC CPS section 4.9.7, which mandates that the maximum time allowed to generate an online CRL is 24 hours.
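
A gap like the one described above can also be caught by an external monitor that simply measures the age of the published CRL against the 24-hour limit in IGC CPS section 4.9.7. The following is a minimal sketch of such a check, not IdenTrust's actual tooling; the CRL URL is a placeholder, and the script assumes the Python cryptography package (version 42 or later for the last_update_utc accessor).

    # Minimal sketch of an independent CRL freshness check (not IdenTrust tooling).
    # CRL_URL is a placeholder; point it at the CRL distribution point to monitor.
    # Requires the "cryptography" package (>= 42 for last_update_utc).
    from datetime import datetime, timedelta, timezone
    from urllib.request import urlopen

    from cryptography import x509

    CRL_URL = "http://example.com/igc-ca1.crl"   # placeholder URL
    MAX_AGE = timedelta(hours=24)                # limit from IGC CPS section 4.9.7

    def check_crl_age(url: str) -> None:
        der = urlopen(url, timeout=30).read()
        crl = x509.load_der_x509_crl(der)
        issued = crl.last_update_utc             # the CRL's thisUpdate field
        age = datetime.now(timezone.utc) - issued
        if age > MAX_AGE:
            # A real deployment would page on-call here, giving an alerting
            # path that is independent of the CA's own email notifications.
            print(f"ALERT: CRL issued {issued}, age {age} exceeds the 24-hour limit")
        else:
            print(f"OK: CRL issued {issued}, age {age}")

    if __name__ == "__main__":
        check_crl_age(CRL_URL)

Run at an interval shorter than the 12-hour generation cycle, a check of this kind would have flagged the missed 05:12:31 run well before the 24-hour mark.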

 

3. Who was impacted by the incident

  • Users of all IGC services, including issuance and validations, were impacted.

 

4. When the incident was discovered

  • Identified Failure Time – August 27, 2024 17:12:31

 

5. A complete list of all certificates that were either issued erroneously or not compliant with the CP/CPS as a result of the incident

  • None

 

6. A statement that the incident has been fully remediated

  • Resolution Time – August 27, 2024 19:13:04 MDT
  • Resolution statement – The root cause of the issue was identified as described above and was remediated; all systems returned to normal operation.


Friday, July 1, 2022

IdenTrust Web Services Failure

1. Which CA components were affected by the incident

  • IdenTrust.com, IdenTrust Global Common Load Balancer, OCSP, and CRL responders  

2. The CA's interpretation of the incident

  • An unusually high traffic volume to IdenTrust.com caused an issue with the Load Balancers. It was determined that the Load Balancers entered a state with a low amount of available system memory due to the unusual increase in traffic. The resulting saturation allowed only a small percentage of traffic to pass through to the network, causing intermittent results with issuance, validation, and reaching the IdenTrust.com website (an illustrative external availability probe is sketched below).
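
Degradation of this kind shows up as intermittent rather than total loss, so an external probe that samples the affected endpoints repeatedly and treats slow or failed answers as degraded can detect it early. The following is a minimal sketch using only the Python standard library; the endpoint list and thresholds are placeholders for illustration, not IdenTrust's actual monitoring.

    # Minimal sketch of an external availability probe (not IdenTrust tooling).
    # The endpoint list and thresholds are placeholders for illustration.
    import time
    import urllib.error
    import urllib.request

    ENDPOINTS = [
        "https://www.identrust.com/",          # public website
        # add the OCSP/CRL responder URLs that should also be probed
    ]
    SAMPLES = 5          # several samples per endpoint to catch intermittent loss
    SLOW_SECONDS = 5.0   # treat answers slower than this as degraded

    def probe(url: str, timeout: float = 10.0) -> tuple[bool, float]:
        """Return (success, elapsed_seconds) for a single GET request."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 400
        except (urllib.error.URLError, TimeoutError):
            ok = False
        return ok, time.monotonic() - start

    def main() -> None:
        degraded = 0
        for url in ENDPOINTS:
            for _ in range(SAMPLES):
                ok, elapsed = probe(url)
                if not ok or elapsed > SLOW_SECONDS:
                    degraded += 1
        if degraded:
            print(f"ALERT: {degraded} degraded probes; check load balancer health")
        else:
            print("OK: all probes healthy")

    if __name__ == "__main__":
        main()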

3. Who was impacted by the incident

  • Users of all IGC services, including issuance, validation, IdenTrust.com, and OCSP and CRL traffic, were impacted by this slowdown.

4. When the incident was discovered

  • Identified Failure Time – July 1, 2022 11:03 MST

5. A complete list of all certificates that were either issued erroneously or not compliant with the CP/CPS as a result of the incident

  • None

6. A statement that the incident has been fully remediated

  • Resolution Time - July 1, 2022 21:30 MST
  • Resolution statement - The root cause of the issue was identified as described above and was remediated; all systems returned to normal operation.

Friday, February 4, 2022

IdenTrust Web Services Failure

1. Which CA components were affected by the incident

  • IdenTrust Global Common OCSP and CRL responders 

2. The CA's interpretation of the incident

  • An IGC OCSP responder certificate had been renewed prior to expiration, but the system did not recognize the renewed certificate and therefore did not respond to OCSP queries after the old certificate expired. The resulting backup of unanswered queries eventually affected all services to IGC users (an illustrative responder check is sketched below).
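
A periodic check against the responder itself can surface this failure mode before relying parties do: send a live OCSP query and, if the response carries a delegated responder certificate, warn well ahead of its expiry so a renewal the responder has not picked up is noticed in time. The following is a minimal sketch, not IdenTrust's tooling; the file paths and responder URL are placeholders, and it assumes the Python cryptography (version 42 or later) and requests packages.

    # Minimal sketch of an OCSP responder health check (not IdenTrust tooling).
    # File paths and the responder URL are placeholders.
    from datetime import datetime, timedelta, timezone

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    SUBJECT_PEM = "subject_cert.pem"       # any certificate issued by the CA (placeholder)
    ISSUER_PEM = "issuer_ca.pem"           # its issuing CA certificate (placeholder)
    OCSP_URL = "http://ocsp.example.com"   # placeholder responder URL

    def load_cert(path: str) -> x509.Certificate:
        with open(path, "rb") as f:
            return x509.load_pem_x509_certificate(f.read())

    def check_responder() -> None:
        cert, issuer = load_cert(SUBJECT_PEM), load_cert(ISSUER_PEM)
        req = (
            ocsp.OCSPRequestBuilder()
            .add_certificate(cert, issuer, hashes.SHA1())
            .build()
        )
        resp = requests.post(
            OCSP_URL,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
            timeout=30,
        )
        ocsp_resp = ocsp.load_der_ocsp_response(resp.content)
        if ocsp_resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
            print(f"ALERT: responder returned {ocsp_resp.response_status}")
            return
        # If the response includes a delegated responder certificate, warn well
        # before it expires so a renewal the responder has not loaded is caught.
        for responder_cert in ocsp_resp.certificates:
            remaining = responder_cert.not_valid_after_utc - datetime.now(timezone.utc)
            if remaining < timedelta(days=7):
                print(f"ALERT: responder certificate expires in {remaining}")
            else:
                print(f"OK: responder certificate valid for another {remaining}")

    if __name__ == "__main__":
        check_responder()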

3. Who was impacted by the incident

  • Users of all IGC services, including issuance and validations, were impacted

4. When the incident was discovered

  • Identified Failure Time - February 4, 2022 12:01 MST

5. A complete list of all certificates that were either issued erroneously or not compliant with the CP/CPS as a result of the incident

  • None

6. A statement that the incident has been fully remediated

  • Resolution Time - February 4, 2022 16:36 MST
  • Resolution statement - The root cause of the issue was identified as described above and was remediated; intermediate issues were also identified and remediated, and all systems returned to normal operation.