Well, that really depends on how the list is implemented and what kind of operations you need (random read/write, sequential read/write, growing or shrinking the capacity). If the list is built on an array, then random access (getting a value from the list by its position) is fast: the access is a single computed jump to a memory location, instead of the extra work a dictionary does.
A dictionary is a hash map. A hash map is like an array, but instead of using an ordered sequence of integers as indices for its elements, it uses integers computed by applying a "hashing" function to a value that represents the key (a sequence of characters, a number, an object; it depends on the library).
The hashing function, given the key, computes an index (and from it a memory address) that points to the location of the associated value in the hash table/map, or dictionary.
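To make that concrete, here is a minimal sketch of a hash table in Python (the class name `TinyDict` and the chaining strategy are illustrative assumptions, not how any particular library does it): the hash of the key, reduced modulo the number of buckets, picks the slot where the value lives.

```python
class TinyDict:
    """Toy hash map: hash(key) % capacity selects the bucket."""

    def __init__(self, capacity=8):
        # Each bucket is a small list, so colliding keys can coexist (chaining).
        self.buckets = [[] for _ in range(capacity)]

    def _index(self, key):
        # The hashing function maps the key to a bucket index.
        return hash(key) % len(self.buckets)

    def set(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:            # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

Looking up a key never scans the whole table, only one (usually tiny) bucket, which is why the access cost stays roughly constant as the table grows.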
This sort of data structure, this dictionary, is commonly said to be "faster" than a linked list (a list built from memory nodes) because the computational complexity of accessing an arbitrary value in the dictionary is constant, O(1): the number of operations the computer has to do to find the value does not grow with the number of elements. In practice it is quasi-constant, because collisions and occasional resizing add some overhead.
The same operation on the aforementioned linked list has O(N) complexity, where N is the position of the element relative to the first node (1 for the first element, 2 for the second, 3 for the third, and so on up to the Nth). To access the Nth element you have to start at the first node, read the memory address of the next node, jump to it, and repeat until you reach the requested index; here the integer index acts as the key for the value.
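A minimal sketch of that traversal (the `Node` class and `nth` helper are hypothetical names for illustration): reaching the Nth element means hopping through every node before it, which is where the O(N) cost comes from.

```python
class Node:
    """One cell of a singly linked list: a value plus a pointer to the next cell."""

    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


def nth(head, n):
    # Walk node by node, following each "next" pointer, until index n.
    node = head
    for _ in range(n):
        node = node.next
    return node.value
```

For example, `nth(head, 2)` on a three-node list performs two pointer hops before it can return the value, while an array would have computed the address in one step.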
If the list is implemented using an array (an array being a sequence of memory blocks, all of the same size), then jumping to a random element in the list has the same O(1) complexity as accessing an element of the dictionary, because you compute the memory position by multiplying the index by the size of the memory blocks and adding the result to the array's base address (which is, in effect, a trivial hash function).
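That address computation can be written out in one line (the base address and element size below are made-up numbers, just to show the arithmetic):

```python
def element_address(base_address, index, element_size):
    """Array indexing reduced to arithmetic: base + index * size_of_one_element."""
    return base_address + index * element_size


# Hypothetical array of 8-byte integers starting at address 1000:
# element 0 sits at 1000, element 1 at 1008, element 3 at 1024, and so on.
addr = element_address(1000, 3, 8)
```

No matter how large the array is, finding element N is always this one multiply-and-add, which is why it matches the dictionary's constant-time access.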
Bear in mind that you won’t really notice any difference until the data size reaches a certain threshold: modern CPUs have large caches and aggressive prefetching that make jumping around nearby memory locations ludicrously fast.