Azure DevOps PublishTestResults task — how to publish results if tests fail

























I am running a pytest-based suite of tests during my Azure DevOps build process. I have two jobs arranged to run these tests against two different environments.



In each job, I run the pytest tests with a script task, generate a JUnit-style XML output file, and then have a PublishTestResults task publish that XML file. This works great, and I'm able to peruse my test results in the Azure build Tests report UI -- but only if all the tests pass. If any tests fail, the publish task is skipped and the tests aren't reported in the UI.



YML extract:



    - job: 'RunTestsQA'
      continueOnError: True
      steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '3.6'
          architecture: 'x64'
      - task: DownloadSecureFile@1
        inputs:
          secureFile: 'ConfigFile'
      - script: pip install -r requirements.txt
        displayName: 'Install Requirements'
      - script: |
          pytest -m smoke --ENV=qa --log-file $SYSTEM_ARTIFACTSDIRECTORY/smoke-qa.log --junitxml="TEST-qa-smoke.xml"
        displayName: 'Test with pytest'
      # PUBLISH JUNIT RESULTS
      - task: PublishTestResults@2
        inputs:
          condition: succeededOrFailed()
          testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit
          testResultsFiles: '**/TEST-*.xml'
          #searchFolder: '$(System.DefaultWorkingDirectory)' # Optional
          mergeTestResults: false # Optional
          testRunTitle: 'API_CHECK QA'
          #buildPlatform: # Optional
          #buildConfiguration: # Optional
          publishRunAttachments: true # Optional


Through some experimentation, I've been able to confirm the XML file is always created. What do I need to fix here? A test report isn't super helpful if it only shows up when the tests pass.










Tags: azure, azure-devops, pytest






      asked Nov 8 '18 at 17:56









klreeher






















1 Answer
































          I'm using Ruby and Minitest, but I have found that the following setting allows the PublishTestResults task to run:



          - script: |
              pytest -m smoke --ENV=qa --log-file $SYSTEM_ARTIFACTSDIRECTORY/smoke-qa.log --junitxml="TEST-qa-smoke.xml"
            displayName: 'Test with pytest'
            continueOnError: true


          The only issue I have found with this setting is that when the test step fails, the build reports as "Partially Succeeded" rather than "Failed".



          Edit:



          Of course, if your build process has any deploy tasks after the test task, you may not want to use this setting, since those tasks would still run after a test failure.
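          Separately, it's worth noting a likely root cause visible in the question's YAML extract: in Azure Pipelines, `condition` is a step-level control option, not a task input, but in the extract it sits inside the `inputs:` block of `PublishTestResults@2`, where it has no effect. A minimal sketch (using the same step names as the question) with the condition hoisted to the step level:

```yaml
# Run the tests, then always publish results.
# `condition` must be a sibling of `task:`/`inputs:`,
# not a key inside the `inputs:` block.
- script: |
    pytest -m smoke --ENV=qa --log-file $SYSTEM_ARTIFACTSDIRECTORY/smoke-qa.log --junitxml="TEST-qa-smoke.xml"
  displayName: 'Test with pytest'

- task: PublishTestResults@2
  condition: succeededOrFailed()   # run even when the pytest step failed
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TEST-*.xml'
    testRunTitle: 'API_CHECK QA'
```

          With the condition on the publish step itself, `continueOnError` on the pytest step becomes optional: a test failure can still mark the build "Failed" while the results are published anyway.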






                edited Nov 14 '18 at 5:43

























                answered Nov 14 '18 at 4:53









                Jordan Brock




























