While other answers make good points, I have to wonder why you are using recursion. This is such a simple problem to solve with a `for` loop.

I assume that you are not supposed to start from any index other than index 0, so consider the following routine:

```
public int searchArray(int[] arr, int elem) {
    int i = 0;
    while (i < arr.length) {
        if (arr[i] == elem) {
            return i;
        }
        // Adjacent elements differ by exactly 1, so elem cannot appear
        // in the next abs(elem - arr[i]) positions; skip past them.
        i += Math.abs(elem - arr[i]);
    }
    return -1;
}
```

(If you need to start the search part-way through the array, you can reintroduce the `offset` input parameter and start `i` from that value.)
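For illustration, here is a small self-contained sketch of how this routine behaves. The class name and sample data are my own, and it assumes, as the question does, that adjacent elements differ by exactly 1:

```
public class SkipSearchDemo {

    // Same routine as above, made static so it can run standalone.
    public static int searchArray(int[] arr, int elem) {
        int i = 0;
        while (i < arr.length) {
            if (arr[i] == elem) {
                return i;
            }
            // elem cannot occur within the next abs(elem - arr[i]) positions.
            i += Math.abs(elem - arr[i]);
        }
        return -1;
    }

    public static void main(String[] args) {
        // Adjacent values differ by exactly 1.
        int[] data = {4, 3, 2, 3, 4, 5, 6, 5, 4};
        System.out.println(searchArray(data, 6)); // prints 6
        System.out.println(searchArray(data, 9)); // prints -1
    }
}
```

Searching for `6` jumps from index 0 to 2 to 6 and finds it; searching for `9` jumps straight past the end and returns `-1`.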

The bottom line is that recursion is overkill here: this approach is still $O(n)$, but the cost of each iteration is less than the cost of the equivalent recursive call.

I am not aware of any way to solve the given problem with a system of better than $O(n)$ complexity.

**Discussion on complexity – why this is $O(n)$**

This answer has generated a lot of discussion about complexity: the claim is that this method only ever scans, at most, half the members of the input array, and thus it should have complexity $O\left(\frac{n}{2}\right)$ instead of $O(n)$. The argument given is something like:

> Consider the worst-case data `1,2,1,2,1,2,1,2,1,2,1,2` and the search term `3`. In this situation the method will start at `data[0]`, then skip to `data[2]`, then `data[4]`, and so on. It will never inspect `data[1]`, nor any other odd-indexed data point. If the search term is even more 'different' from the actual data (e.g. `100`), then the method will do only one comparison, at `data[0]`, and will then return 'not found' (`-1`).

This is an interesting observation: the method only ever needs to scan half the data **at most**. This is especially interesting compared with a 'naive' method which just scans the data one member at a time and returns when it finds the value. That 'naive' method most certainly has $O(n)$ 'performance' and complexity, and the skip-method will be more than twice as fast.
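The observation can be checked by counting comparisons directly. A rough sketch (the counting wrapper is mine, not part of the original answer):

```
public class ComparisonCount {

    // Skip-search that also reports how many elements it inspected.
    // Returns {index, comparisons}.
    public static int[] skipSearch(int[] arr, int elem) {
        int comparisons = 0;
        int i = 0;
        while (i < arr.length) {
            comparisons++;
            if (arr[i] == elem) {
                return new int[]{i, comparisons};
            }
            i += Math.abs(elem - arr[i]);
        }
        return new int[]{-1, comparisons};
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2};
        // Searching for 3 skips every odd index: 6 comparisons for 12 elements.
        System.out.println(skipSearch(data, 3)[1]);   // prints 6
        // Searching for 100 gives up after a single comparison.
        System.out.println(skipSearch(data, 100)[1]); // prints 1
    }
}
```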

The important thing to note though, is how the algorithms scale **relative to the amount of data**, not relative to each other!

So, consider a hypothetical set of worst-case data `1,2,1,2,1,2,....` and the search term `3`. Suppose this hypothetical input happens to be searched in 4 milliseconds by the skip-method, and in 8 milliseconds by the naive method. Now we double the amount of data: what happens? The processing time for **both** methods will double!

In both cases, the running time will double for each doubling of the data volume. This is what makes **both** algorithms $O(n)$ complexity. From Wikipedia:

> In computer science, big O notation is used to classify algorithms by how they respond (e.g., in their processing time or working space requirements) to changes in input size.

Reversing the argument: if the skip-method really had $O\left(\frac{n}{2}\right)$ complexity, then I would expect that, if I double the data, the execution time would only increase by half, or 50%. This is 'obviously' not true for the skip-method (nor for the naive method).

Both methods have $O(n)$ complexity because they both scale the same way with increasing volumes of data.
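One way to see this scaling concretely is to count inspected elements rather than milliseconds. A sketch with my own helper names: doubling the input doubles the count for both methods, which is exactly what $O(n)$ means.

```
public class ScalingDemo {

    // Number of elements the skip-method inspects.
    public static int skipCount(int[] arr, int elem) {
        int count = 0;
        for (int i = 0; i < arr.length; i += Math.abs(elem - arr[i])) {
            count++;
            if (arr[i] == elem) {
                break;
            }
        }
        return count;
    }

    // Number of elements a naive linear scan inspects.
    public static int naiveCount(int[] arr, int elem) {
        int count = 0;
        for (int value : arr) {
            count++;
            if (value == elem) {
                break;
            }
        }
        return count;
    }

    // Builds the worst-case data 1,2,1,2,... of the given length.
    public static int[] alternating(int n) {
        int[] arr = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = 1 + (i % 2);
        }
        return arr;
    }

    public static void main(String[] args) {
        System.out.println(skipCount(alternating(1000), 3));  // prints 500
        System.out.println(skipCount(alternating(2000), 3));  // prints 1000 -- doubled
        System.out.println(naiveCount(alternating(1000), 3)); // prints 1000
        System.out.println(naiveCount(alternating(2000), 3)); // prints 2000 -- doubled
    }
}
```

The skip-method does half the work of the naive scan at every size, but both counts grow linearly with $n$: same complexity class, different constant factor.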

But, just because they scale the same way, does not mean that one method is not better than the other… **obviously**.

The problem is in $O(n)$. Consider the case that 200_success describes.

You have a sequence of alternating 1s and 2s where a single 1 is replaced by a 3.

When you are asked to search for a 3 you know, after inspecting the first element, that it will have an even index. But if every odd index holds a 2, then any even index can hold a 3, so you cannot guarantee that the 3 is not at the last index you search. This means that you will have to look in $O\left(\frac{n}{2}\right)$ places. $O\left(\frac{n}{2}\right) = O(n)$, so the problem is $O(n)$.

No correct algorithm can have better worst time performance than this.
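A sketch of that worst case (the helper setup is my own): plant the 3 at the last even index of an alternating array, and count how many positions the skip-method must inspect before finding it.

```
public class WorstCase {

    public static int probes; // how many elements the last search inspected

    public static int searchArray(int[] arr, int elem) {
        probes = 0;
        int i = 0;
        while (i < arr.length) {
            probes++;
            if (arr[i] == elem) {
                return i;
            }
            i += Math.abs(elem - arr[i]);
        }
        return -1;
    }

    public static void main(String[] args) {
        int n = 12;
        int[] data = new int[n];
        for (int i = 0; i < n; i++) {
            data[i] = 1 + (i % 2); // 1,2,1,2,...
        }
        data[n - 2] = 3; // replace the last 1 with a 3

        System.out.println(searchArray(data, 3)); // prints 10: found at the last even index
        System.out.println(probes);               // prints 6: inspected n/2 of the elements
    }
}
```

Every even index had to be probed before the 3 turned up, so no algorithm restricted to these inputs can rule out any of those $n/2$ positions without looking.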

A more interesting question is what happens if you know that no number occurs more than some fixed upper bound.

You never need to look backwards if you start at 0.

Proof by induction over the algorithm's steps $j$:

For $j = 0$, $i(0) = 0$, so you cannot have gone backwards.

For $j > 0$, there are two cases. Either we have found our number and stop, or there is a difference `diff(j) = abs(elem - array[i(j)])`. Since adjacent elements differ by exactly 1, no index in `[i(j), i(j) + diff(j))` can contain elem. By the induction hypothesis, no index in `[i(j-1), i(j-1) + diff(j-1)) = [i(j-1), i(j))` contains it either. So the next possible index is `i(j) + diff(j) (= i(j+1))`, and the algorithm never needs to look backwards.

But that algorithm really is $O(n)$, as Taemyr already answered.
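The induction argument can be spot-checked mechanically: on arrays whose adjacent elements differ by exactly 1, the skip-method must return the same index as a plain linear scan, because it never jumps over a position that could hold `elem`. A sketch (the random-testing harness is my own):

```
import java.util.Random;

public class InductionCheck {

    public static int skipSearch(int[] arr, int elem) {
        int i = 0;
        while (i < arr.length) {
            if (arr[i] == elem) {
                return i;
            }
            i += Math.abs(elem - arr[i]);
        }
        return -1;
    }

    public static int linearSearch(int[] arr, int elem) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == elem) {
                return i;
            }
        }
        return -1;
    }

    // Random array where neighbours differ by exactly 1 (a random walk).
    public static int[] randomWalk(Random rnd, int n) {
        int[] arr = new int[n];
        for (int i = 1; i < n; i++) {
            arr[i] = arr[i - 1] + (rnd.nextBoolean() ? 1 : -1);
        }
        return arr;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int trial = 0; trial < 1000; trial++) {
            int[] arr = randomWalk(rnd, 50);
            int elem = rnd.nextInt(21) - 10;
            if (skipSearch(arr, elem) != linearSearch(arr, elem)) {
                throw new AssertionError("mismatch on trial " + trial);
            }
        }
        System.out.println("all trials agree");
    }
}
```

If the skip ever passed over an occurrence of `elem`, the two searches would disagree on its first index; they never do.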