Human BINGO: Navigating Generative AI and LLMs Across Languages - Call List

(Print) Use this randomly generated list as your call list when playing the game. There is no need to say the BINGO column name. Place some kind of mark (such as an X, a checkmark, a dot, or a tally mark) on each cell as you announce it, to keep track. You can also cut out each item, place the slips in a bag, and pull them from the bag.


  1. Has attended an ICML conference before
  2. Can name three different LLM architectures
  3. Is interested in the ethical implications of generative AI
  4. Has used a generative AI model for a non-academic purpose
  5. Has used an LLM to summarize research papers
  6. Knows at least three programming languages
  7. Is currently working on a project involving cross-lingual transfer learning
  8. Can recommend a good AI- or tech-related podcast
  9. Has collaborated on a research paper with someone from a different continent
  10. Has participated in a hackathon focused on AI or LLMs
  11. Has successfully debugged a complex LLM
  12. Has presented a paper on natural language generation
  13. Has published research on multilingual LLMs
  14. Is familiar with the concept of prompt engineering
  15. Has a preferred AI research tool they can recommend
  16. Has contributed to an open-source AI project
  17. Has used a generative AI model to create art or music
  18. Is optimistic about the future of human-AI collaboration
  19. Has traveled internationally to attend this conference
  20. Can explain the difference between causal and masked language models
  21. Has experience with low-resource languages in NLP
  22. Is excited about the potential of LLMs in education
  23. Has learned a new language in the last year
  24. Has experience with fine-tuning a pre-trained LLM