I've created a small library that lets you create mock functions. With it, you can record calls to a function and check the arguments it was called with. It has happened to me multiple times that somebody changed a function that was mocked, and the mock no longer matched: either the values returned by the mock were wrong, or the function was called differently in the code.
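To give a rough idea of the call-recording part, here is a minimal sketch of a mock that records its calls in an atom and returns a fixed value. This is only an illustration; recording-mock and the ::calls metadata key are hypothetical names, not the library's actual API.

(defn recording-mock [return-value]
  (let [calls (atom [])]
    (with-meta
      (fn [& args]
        ;; remember every argument list the mock was called with
        (swap! calls conj (vec args))
        return-value)
      {::calls calls})))

(def my-mock (recording-mock 42))
(my-mock 1 2)               ;=> 42
(my-mock "foo")             ;=> 42
@(::calls (meta my-mock))   ;=> [[1 2] ["foo"]]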
I recently added support for instrumenting mock functions with Malli schemas. If you declare a mock for a function that has a Malli function schema, the mock will be instrumented with it. This makes it possible to discover mocks that return invalid values, or that accept invalid arguments or the wrong number of arguments.
Here's an example:
(require '[malli.core :as m])
(m/=> my-inc [:=> [:cat :int] :int])
(defn my-inc [x]
  (inc x))
;; You have to provide the function's symbol,
;; with or without a namespace (aliases are supported).
(def my-inc-mock (mock-fn 'my-inc 1))
(my-inc-mock 0)
;=> 1
(my-inc-mock "foo")
;=> An exception ::invalid-input
(def my-inc-mock2 (mock-fn 'my-inc nil))
(my-inc-mock2 1)
;=> An exception ::invalid-output
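For context, here is roughly how such instrumentation could be wired up on top of Malli. This is only a sketch, not the library's implementation: instrumented-mock is a hypothetical helper, while m/function-schemas and m/-instrument are real Malli functions (the library may instead use the malli.instrument namespace or another mechanism). It assumes malli.core is required as m, as above.

(defn instrumented-mock [fn-sym return-value]
  (let [fn-ns  (symbol (or (namespace fn-sym) (str *ns*)))
        fn-nm  (symbol (name fn-sym))
        ;; look up the schema registered with m/=>
        schema (get-in (m/function-schemas) [fn-ns fn-nm :schema])
        mock   (fn [& _args] return-value)]
    (if schema
      ;; the instrumented fn throws :malli.core/invalid-input or
      ;; :malli.core/invalid-output when the schema is violated
      (m/-instrument {:schema schema} mock)
      mock)))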
This instrumentation also works with the macro for defining mocks.
(with-mocks [my-inc 2]
(my-inc "foo"))
;=> An exception ::invalid-input
(with-mocks [my-inc nil]
  (my-inc 1))
;=> An exception ::invalid-output
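For completeness, here is one way the with-mocks macro could be implemented, assuming it simply rebinds each var to a mock built with mock-fn for the duration of the body. Again, this is a hypothetical sketch based on with-redefs, not necessarily how the library does it.

(defmacro with-mocks [bindings & body]
  `(with-redefs [~@(mapcat (fn [[sym ret]]
                             ;; rebind each var to a schema-checked mock
                             [sym `(mock-fn '~sym ~ret)])
                           (partition 2 bindings))]
     ~@body))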