Is it possible to test Jest tests?


























I want to build a tool that checks different exercises. One of the exercises is unit testing, so I need to determine whether the tests written by the student are good tests. For example, say the student has the following code:



export class HelloWorld {
  public static showHello(): string {
    return 'Hello World!';
  }
}



With the following Jest test:



import HelloWorld from '..';

describe('Hello World exercise', () => {
  test('Check function is defined', () => {
    expect(HelloWorld.showHello()).toBeDefined();
  });

  test('Empty input results in Hello World!', () => {
    expect(HelloWorld.showHello()).toBe('Hello World!');
  });
});


How can I verify that the student did indeed test these two things? I thought about something like



export const firstTest = test()...


And then checking whether firstTest tests the right thing. The disadvantage, though, is that every single test has to be exported for this solution to work.
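To make that concrete, here is a rough sketch (the file layout is hypothetical, and it exports the test callback rather than the return value of test(), since Jest's test() does not return anything useful). Every test body would need an export like this so that a separate grading suite could import and re-run it:

// helloWorld.test.ts (student file, hypothetical layout)
import HelloWorld from '..';

// Exported so that a grading suite could import and re-run this assertion.
export const firstTest = (): void => {
  expect(HelloWorld.showHello()).toBeDefined();
};

describe('Hello World exercise', () => {
  test('Check function is defined', firstTest);
});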










node.js typescript testing jestjs

asked Nov 15 '18 at 13:18 – user2531964
























  • Typically, you would do this one of two ways. First, define a public interface your students must implement and write unit tests against that public interface. Then you can just run your tests against their code. If it works, they wrote good code and good tests. This method gives you some confidence in the quality of the tests. The second is visual inspection, which you can augment with a code coverage tool. I don't know what Jest uses, but Istanbul is pretty popular in JS in general.

    – c1moore
    Nov 15 '18 at 13:23












  • do you provide code that should be covered with tests? or is it provided by students also?

    – skyboyer
    Nov 15 '18 at 19:06











  • Thanks @c1moore, this will help with inspecting their code. But that will not test their tests, right? They could, for example, just test 'true'.toBe('true').

    – user2531964
    Nov 16 '18 at 8:59











  • And @skyboyer, the students should indeed write both the code and the tests. So in the above example, the student should also have written the HelloWorld class.

    – user2531964
    Nov 16 '18 at 9:00











  • I believe it is not possible to validate tests written for custom code. Why? Because otherwise it would be part of the software development process, whereas currently we are only able to calculate coverage for tests. But if the task is changed to "write tests for predefined code", then you could prepare some intentionally broken code that good tests should fail on. That way you could validate the tests automatically.

    – skyboyer
    Nov 16 '18 at 9:19

















1 Answer

Testing tests requires black-box testing, which means you either need to know beforehand which tests are written (and how they should be written), or you need to review each test and write unique tests for each one. Given that you are trying to test students' understanding of tests and to do this in an automated way, I doubt either option is plausible.



However, there are several heuristics you can use. I suggest using a combination of the first 3 of these methods.



  1. Define a public interface your students must implement, and write tests against this public interface. This doesn't directly test your students' tests, but if any of your tests fail, they did not write sufficient (quality) tests. These tests should cover the easy and obvious cases your students should focus on.

  2. Use a code coverage tool such as Istanbul. This doesn't test the quality of the tests directly, but it does give you confidence that the tests actually touch the code (a minimal Jest configuration sketch follows this list).

  3. This method is inspired by (not an implementation of) a technique I recently discovered called "Adversarial Gamedays". For this method, you would randomly break the student's code in several places, let's say 10, and let the tests run. Chances are the tests won't catch all of the bugs you introduced, but they should catch x% of them, where x is the threshold you determined to be acceptable. The bugs should be spread across the program so as not to take advantage of a single flaw. This tests both quantity and quality (a rough runner sketch appears after the summary below).

  4. Visual inspection and/or manual testing. This method is error-prone and time-consuming. If you don't have TAs, it's probably not a viable option. However, if you spend sufficient time on this method, it is the most thorough and least biased.
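Regarding point 2, a minimal sketch of what enforcing coverage could look like with Jest's built-in coverage support (Istanbul under the hood), assuming a TypeScript config file (jest.config.ts) is used; the threshold numbers are only illustrative. With coverageThreshold set, the test run fails when coverage falls below the configured values.

// jest.config.ts -- illustrative thresholds; tune them per exercise
export default {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 100, lines: 90, statements: 90 },
  },
};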

As I stated, a combination of the first 3 options would be optimal. For example, provide a public interface for the system, run code coverage tools, and randomly break y areas of each student's private implementation.
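And for point 3, a rough sketch of such a runner (the file path, the mutation list, and the npx jest invocation are all assumptions about the project layout): it applies crude text-level breaks to the student's source one at a time, re-runs the student's own suite, and counts how many breaks were caught. This is essentially mutation testing; dedicated tools such as Stryker automate the same idea.

// grade-tests.ts -- rough sketch, not a production grader
import { execSync } from 'child_process';
import { copyFileSync, readFileSync, writeFileSync } from 'fs';

const source = 'src/HelloWorld.ts';                 // hypothetical path to the student's code
const backup = `${source}.bak`;

// Each entry is a naive text replacement that a good test should catch.
const mutations: Array<[string, string]> = [
  ["'Hello World!'", "''"],                          // break the return value
  ['return', 'throw new Error("mutated"); return'],  // make the method throw
];

copyFileSync(source, backup);
let caught = 0;
for (const [from, to] of mutations) {
  const original = readFileSync(backup, 'utf8');
  writeFileSync(source, original.replace(from, to));
  try {
    // A non-zero exit code means at least one student test failed, i.e. the break was caught.
    execSync('npx jest --silent', { stdio: 'ignore' });
  } catch {
    caught += 1;
  }
}
copyFileSync(backup, source);                        // restore the student's original code
console.log(`Student tests caught ${caught}/${mutations.length} injected bugs`);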






answered Nov 23 '18 at 2:03 – c1moore




























