Plastic bag bans are all the rage. It’s not the purpose of this post to take a position on the matter. Before you do, however, I’d recommend checking out this:
and especially this:
and the Woods Hole Oceanographic Institution has many articles about plastics in the oceans.
Good modern governance means making evidence-based decisions. So, if a bag ban or a bag tax of any kind is going to be imposed, it makes sense to assess how many bags, and of what kinds, are being used before the ban or tax, and how this changes afterwards. This kind of work used to require professional surveyors and statisticians. But with the availability of online datasets, access to the experience of others, widely available open-source computing, and new survey technology and methods, expensive professional options aren’t the only way this can be done. Professional surveyors tend to argue otherwise. But the fact is, you can learn a lot by using Google Earth and Google Maps these days.
Surveys are designed around answering specific questions. If the objective is to estimate how many bags of one kind or another are being consumed per week in a town or county, that’s one question. If the objective is to estimate how many people regularly choose paper over plastic, or bring their own bags, that’s an entirely different question. The governance and the group need to choose what’s important to them.
Surveys are also designed around the skillsets of the people involved in conducting them. With a volunteer organization, it is important that the procedure be something they can readily be trained in, and I say “trained” because no survey can do without training, however simple.
Surveys also ought to be easy on the surveyors, especially if they are volunteers. The requirements of when they need to be on station oughtn’t be so onerous that they might not arrive on time, or not show up at all, and, worse, misrepresent to the group what happened. So, for instance, even if there are shoppers using bags in a store at 6:00 a.m., that hour probably won’t be covered well if a sampling plan requires it.
Surveys also ought to be easy to explain to those who want to know how they were done. Along with this, it is critically important that, as part of an analysis of the primary quantities of interest, like plastic bags used per week, the survey’s contribution to overall uncertainty is quantified.
All that noted, there is a lot interested and committed citizens can do to gather data like this and interpret the results. This can be important whether or not a town or county governance considers its findings as inputs. It can serve as a check on their result. It can also serve as a check on their budget: why did they pay some expensive professional organization to do something when something good enough for the purpose could have been had much cheaper? That said, any old surveying or sampling technique which appears to be good enough isn’t good enough. That is, there is some training and learning involved.
Returning to the bag ban matter, use of bags is key to the project. As with any policy, if a regulation is imposed and there is no evidence it helps or has untoward consequences, it ought to be revoked. To do that means measuring a baseline, and then measuring after the regulation is in place. It probably is a good idea to measure a couple of times after the regulation is in place. In statistics and engineering, this general approach is called A/B testing, which is explained better here. As mentioned above, how one gets counts — they are nearly always counts — and then analyzes them depends very much on the question being asked.
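The before-versus-after comparison can be made concrete. The post’s supporting code is R, but here is a language-agnostic Python sketch, with hypothetical counts, of one simple way to compare bag counts per hour of observation before (A) and after (B) an ordinance, using a normal approximation for Poisson rates:

```python
import math

def poisson_rate_z(count_a, hours_a, count_b, hours_b):
    """Approximate z-statistic comparing two Poisson rates (counts per
    hour of observation), via a normal approximation. Reasonable for
    the large counts a grocery-store survey yields."""
    rate_a = count_a / hours_a
    rate_b = count_b / hours_b
    # Standard error of the difference of two Poisson rates
    se = math.sqrt(count_a / hours_a**2 + count_b / hours_b**2)
    return (rate_a - rate_b) / se

# Hypothetical: 412 plastic bags seen in 20 hours of surveying before
# the ordinance, 118 bags in 20 hours after.
z = poisson_rate_z(count_a=412, hours_a=20, count_b=118, hours_b=20)
print(round(z, 2))
```

A large z here would indicate the drop in the rate is far bigger than sampling noise alone would produce; the survey-design uncertainty discussed above would still need to be folded in on top of this.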
But, in the case of bags, there’s the really important question of where and when to sample. In this case, I’m setting aside bags given out in stores other than grocery stores. And, in this case, I’m using the example of my home town, Westwood, Massachusetts.
Counting noses or bags assumes there’s a sampling frame in hand. In Westwood’s case, the concern is the population of residents or visitors frequenting local grocery stores. And in Westwood’s case, there’s a desire to count people and their preferences for bags, whether plastic, paper, some mix, or whether they bring their own bags and some mix, as well as size of order. So this means counting people.
There are three grocery stores in Westwood: Roche Brothers in Islington, Wegman’s at University Station, and Lambert’s. There are convenience and other stores which sell small amounts of groceries, but their customers were assumed to show the same behaviors as the populations of the three majors. Still, surveying these stores demands either deep cooperation on the part of their owners, an outrageous commitment on the part of volunteers, or a sampling scheme constructed with knowledge of who goes where and when. Where to find such a thing?
Most substantial grocery store entries on Google Maps now present a bar chart of when they are most frequently visited. It looks like this:
Now, I’ve discovered that that dashed line is a fixture marking a certain number of visits per hour. It is constant for a given store across days of the week, and it is at least roughly consistent across stores in an area. This is great. This was helped, in part, by a visit to one of the stores by a volunteer to take data for a half hour. She was collecting data for me, and was also trying out a data collection form and seeing how difficult or easy it was to get the kind of data that was pertinent. Doing this is an excellent idea.
But, wait, you say: These aren’t numbers. It’s a bar chart.
Digitizing. I learned this when I took courses in Geology. An amazing amount of data is recovered by digitizing figures in scientific journals. Why not Google?
There are several digitizing applications out there. I’ve tried a couple and, so far, I like WebPlotDigitizer best. So, I did. Digitize, that is. How?
Here I’ve marked, by hand, two points on the bar chart, attempting to ascertain the height of the dashed line in pixels from the baseline. Note the original images aren’t produced to the same resolution or size, so it’s important to calibrate each one. In the upper right you can see WebPlotDigitizer‘s close-up of the place where the cursor is. That’s a little hard to see, since there’s so much real estate there, but here’s a close-up of the bottom:
And here’s a close-up of the upper right, a close-up of a close-up:
The completed digitization looks like this:
and results in a .csv file which looks partly like:
There look to be extra points in the digitization; I’ll explain those next. It is important to note that the code I reference later, which is available to the public, demands digitization be done in this style. That code has no other documentation, and I don’t give a recipe here. That said, it’s not difficult to figure out.
The first point I take is the baseline, not in any of the bars of the bar plot. The next point I take is on the dashed upper score. I then do two bars, taking the baseline of the bar, and the upper horizontal of the bar. The rest of the bars have only their upper horizontal marked. The point is to get a good estimate of the baseline, obtained as an average of three baseline observations, the initial outside of bars, and then from two bars. There is one observation on the upper score, and then there are observations of the upper horizontals for the rest of the bars.
The heights of the bars can be estimated as the difference between the reading for their upper horizontals and the estimate of the baseline (the mean of the three observations). These can be divided by the distance between the baseline estimate and the upper score, expressing each bar as a fraction of the range from baseline to upper score.
Note that because these are pixel coordinates, the ordinate values of the observations higher up on the bar plot are lower in coordinates than, say, the baselines. This is because distances on the ordinate are measured (ultimately) as pixels from the top of the image. Accordingly, some distance calculations need to have their signs reversed.
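To make that arithmetic concrete, here is a small Python sketch (the pixel readings and the function name are mine, purely for illustration) of turning digitized pixel coordinates into fractions of the upper score, including the sign reversal just described:

```python
# Pixel y-coordinates grow downward from the top of the image, so a
# bar's height in pixels is (baseline_y - bar_top_y); dividing by
# (baseline_y - score_y) expresses each bar as a fraction of the
# dashed upper score.

def bar_fractions(baseline_ys, score_y, bar_top_ys):
    baseline = sum(baseline_ys) / len(baseline_ys)  # mean of 3 readings
    span = baseline - score_y   # pixels from baseline up to upper score
    return [(baseline - y) / span for y in bar_top_ys]

# Hypothetical digitized readings (origin at image top-left):
fracs = bar_fractions(
    baseline_ys=[310.0, 309.5, 310.5],  # three baseline observations
    score_y=110.0,                      # the dashed upper-score line
    bar_top_ys=[260.0, 210.0, 160.0, 135.0],
)
print(fracs)  # each bar as a fraction of the upper score
```

Taller bars sit at *smaller* y-pixel values, which is why both differences are taken baseline-minus-reading.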
The accompanying R code reads in these .csv files and then extracts the heights for each of the hours in a day. There is a system for the Westwood case, which you can understand by reading the code, where each of the separate stores’ files is assimilated into a single corresponding matrix of scores versus hour of day and day of week.
In the end what’s in hand is a matrix of values proportional to numbers of visits to the stores. Calibration and actual counts have indicated that a value of unity corresponds to about 140 visitors.
Now that traffic to the stores is available, or at least something proportional to traffic, it is a matter of constructing a sampling plan. A plan which is proportional to traffic makes the most sense. This is equivalent to sampling time intervals where the probability of selecting an interval is proportional to the estimated traffic in that interval. For this study, the surveyors expressed a desire not to be surveying more than 60-90 minutes at a time. I settled on 60 minutes. So the question became one of finding a set of samples of individual hours for a store, weighted by the probability of traffic.
The melt.array function of the R reshape2 package was handy here, and I was able to use the weighted sampling-without-replacement of the R built-in sample to achieve the appropriate selection. The volunteers had a strict constraint on the total number of times they wanted to visit stores, and they also did not want to survey before 9:00 a.m. or after 10:00 p.m. The code in the R file generateSamplingPlan.R produces several options, based upon the setting of the N.stage1 variable.
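The post’s actual code is R (reshape2’s melt.array plus the built-in sample); purely as an illustration of the same idea, here is a hypothetical Python sketch that draws survey slots without replacement, with probability proportional to traffic, restricted to the 9:00 a.m.-10:00 p.m. window:

```python
import random

def sampling_plan(traffic, n_slots, earliest=9, latest=22, seed=1):
    """Draw (day, hour) survey slots without replacement, with
    selection probability proportional to estimated traffic."""
    slots, weights = [], []
    for (day, hour), score in traffic.items():
        if earliest <= hour < latest and score > 0:
            slots.append((day, hour))
            weights.append(score)
    rng = random.Random(seed)
    chosen = []
    for _ in range(min(n_slots, len(slots))):
        # weighted draw, then remove that slot (without replacement)
        pick = rng.choices(range(len(slots)), weights=weights, k=1)[0]
        chosen.append(slots.pop(pick))
        weights.pop(pick)
    return sorted(chosen)

# Hypothetical traffic scores (fractions of the dashed upper score):
traffic = {("Sat", 11): 0.9, ("Sat", 17): 0.7, ("Sun", 13): 0.8,
           ("Mon", 7): 0.5,   # excluded: before 9:00 a.m.
           ("Wed", 15): 0.3}
plan = sampling_plan(traffic, n_slots=3)
print(plan)
```

Sequential weighted draws with removal is also how R’s sample behaves when given prob and replace=FALSE, so this mirrors the selection mechanism, if not the actual code.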
The result is a sampling plan which looks a little like this:
The code and data supporting this post are available in a repository. Note that it is live and exists to support an ongoing project, so there is no promise of stability. Note, however, that it is subject to Google’s version control system.
So, what happens after the regulation or ordinance is adopted? What’s the sampling plan to find out how things are going?
At first it seems that simply repeating the same days and times would be best. On the other hand, remember that the sampling plan was designed to expose data collection to as representative a set of people as could be had, given the constraints on that sampling. So, in principle, it shouldn’t hurt at all to generate new sampling plans with the same constraints, ones which, invariably, will give other times and days. They are all vehicles for getting at what the population prefers.