I think what you're seeing here is a fear that, since pollsters don't see every team play, when they have to evaluate 3 or 4 teams that have all chipped away at each other, they might favor the ones they have actually seen, which would most likely be teams in their own conference (a.k.a. conference bias), and put those on top. Four spots can take you from 10th to 14th and get you bumped by AQs, so the fear is real whether or not it's justified.
One suggestion I have for any poll-based system to help reduce that:
In a bar graph, show the distribution of ranks voted for each team.
There is always a push to release how people voted. I don't think people really care how any one voter voted; what they want to see is whether some group is inflating a team's rank compared to the rest of the voters (i.e. a bimodal distribution), whether it's a bell shape, or whether there are one or two extreme outliers.
If UCSB had a bell-shaped distribution around 8 with a few higher and a few lower, and FSU the same thing around 11, no one would be arguing, because you would be arguing with the collective judgment of 40 people. But if UCSB had 5 guys place them at #4 and the rest at 11 or 12 (which could pull their average up), then there would be an argument for evaluating the pollsters.
People want to see whether there is agreement among the pollsters, and averages and standard deviations alone can't show that.
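As a quick illustration of that last point, here's a sketch with made-up ballots from 40 hypothetical pollsters: two teams can end up with the exact same average rank and the exact same standard deviation, even though one distribution is a genuine consensus and the other is two camps that disagree.

```python
from statistics import mean, stdev
from collections import Counter

# Made-up ballots from 40 hypothetical pollsters.
# "Consensus" team: nearly everyone at 8, a couple of outliers either side.
consensus = [6] * 4 + [8] * 32 + [10] * 4
# "Two camps" team: half the room at 7, half at 9, a few at 8.
two_camps = [7] * 16 + [8] * 8 + [9] * 16

for name, votes in [("Consensus", consensus), ("Two camps", two_camps)]:
    print(f"{name}: mean={mean(votes):.2f}, stdev={stdev(votes):.2f}")
    # Crude text bar chart of how many voters picked each rank.
    for rank, n in sorted(Counter(votes).items()):
        print(f"  rank {rank:2d}: {'#' * n}")
```

Both summary lines come out identical (mean 8.00, stdev 0.91), but the bar charts immediately show that one team has near-unanimous support for one spot while the other has a split room. That's exactly the kind of thing only the full distribution reveals.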