Generative AI Policy

I don’t use generative AI or LLMs for any purpose at this time. You can trust that no artifacts that I author (code or text) were produced with these tools.

I don’t have a specific objection to these tools; I try to follow the field fairly closely and I’m overall ambivalent on the whole topic. I don’t have any coherent dogma.

My reasoning is as follows:

  • Per a variety of metrics, the conventional resources I’m using now seem to be working reasonably well.
    • I do believe that as a taxpayer-funded researcher I have an obligation to the public to use the best tools I can, not just the ones I happen to know, so this might be a bit of a cop-out.

I’ve seen people pooh-pooh the next generation in ways that I think are unreasonable (e.g. telling off a grad student for watching a training video on a topic rather than reading the textbook), and I don’t want to perpetuate that tradition. However, this is my personal position at this time.

https://en.wikibooks.org/wiki/Wikibooks_talk:Artificial_Intelligence#Use_of_LLMs_for_this_policy_(and_evidence_of_issues)

Nice slide deck.

Helen reaction

Ancillary to all the discussion about the outputs, there’s the whole “scraperbots run amok” chilling effect.

https://arxiv.org/pdf/2402.08021

Trying to make arguments about the energy efficiency or water use of AI is a bit on the nose for me, because look at the energy footprint of the CLS: maybe 600 N shifts a year at 10 megawatts continuously works out to about 146 megawatt-hours per shift.
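For what it's worth, here's the back-of-envelope arithmetic behind that 146 MWh figure, a rough sketch using only the shift count and power draw quoted above (not official CLS numbers):

```python
# Rough back-of-envelope: CLS energy per delivered shift,
# using the figures quoted above (~10 MW continuous, ~600 shifts/year).
power_mw = 10              # assumed continuous facility draw, MW
hours_per_year = 365 * 24  # 8760 h
shifts_per_year = 600      # assumed delivered shifts per year

annual_energy_mwh = power_mw * hours_per_year               # ~87,600 MWh/yr
energy_per_shift_mwh = annual_energy_mwh / shifts_per_year

print(f"~{energy_per_shift_mwh:.0f} MWh per shift")         # prints "~146 MWh per shift"
```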

On the other hand, what’s the power consumption of the Rogers Centre / SkyDome? It looks like it’s roughly the same order of magnitude.

Grid carbon intensity is about 630 g CO₂e/kWh.