AI FAQ

At NIC, AI tools are transforming teaching, learning, and work. This resource brings together important questions, insights, and key considerations for instructors and students to navigate the use of AI in the classroom.

It’s crucial to establish clear and ongoing communication with students about the evolving role of AI tools in your course. Students will encounter different expectations across courses and assignments, so transparency is key. Include a course-level statement in your outline using NIC’s Course Outline Chart [Word document], along with specific guidelines for each assignment aligned with NIC’s six-laneway approach.

Things to consider when drafting your statement:

  • If, how, and when AI tools are allowed
  • The rationale behind these decisions and how the use of AI supports or conflicts with course goals and learning outcomes
  • Any student responsibilities, such as citation requirements

In addition to these written statements, your course and assignment-level AI guidelines should be reviewed with students to ensure understanding and to provide an opportunity for dialogue and questions.

Currently, Microsoft Copilot with Enterprise data protection (distinct from Microsoft 365 Copilot) is the only AI tool approved for use at NIC; however, this may change as additional tools undergo review. It is available to all faculty, staff, and students through NIC’s M365 license.

To access this tool, users can log in at https://copilot.cloud.microsoft/ or use the Microsoft Edge browser sidebar with their NIC credentials. A shield icon or profile name will indicate that users are logged in correctly, ensuring privacy and data protection. This secure connection is only available when users are logged in.

If you permit AI use in coursework, you should also ensure that students know how to appropriately acknowledge use of these tools. See NIC’s Library AI guide. You may also choose to have students provide an appendix to their work showing prompts and outputs or complete a Student AI Disclosure Form [Word] [PDF] to attach to assessment submissions.

The use of Microsoft Copilot and other AI tools does not automatically constitute academic misconduct at NIC. Whether AI tools are permitted in a course is a decision made at the course or program level. Instructors should clearly communicate expectations with students early in the term, such as in the syllabus.

If an instructor prohibits the use of AI tools for assignments, using them would be considered academic misconduct. If AI tools are permitted, instructors should specify limitations, proper acknowledgment, and acceptable usage. If AI usage has not been addressed, it may be considered unauthorized under academic misconduct policies (e.g., using unapproved resources or plagiarism). Students should seek clarification from their instructor if it’s not specified.

Students should not assume all technologies are permitted. If unsure about AI tools, they must ask their instructor for guidance.

NIC’s academic misconduct policy addresses actions that give unfair academic advantage, and AI tools could fall under this if used improperly. Instructors should regularly address academic integrity and provide opportunities to discuss expectations throughout the semester.

NIC strongly advises against using AI detection tools on student work due to legal, pedagogical, and practical concerns. Instructors should not upload student work or personal information to unapproved tools, as this may violate the Freedom of Information and Protection of Privacy Act (FIPPA) and the Copyright Act.

Key concerns include:

  • Accuracy: AI detection tools often have high false positive rates, leading to unjust accusations and stress for students.
  • Bias: These tools may disproportionately flag non-native English speakers, raising equity issues.
  • Evasion: Detection tools can be easily bypassed, making results unreliable.
  • Rapid Advancement: AI technology evolves quickly, and detection tools struggle to keep up.
  • Transparency: Most tools don’t explain why content is flagged, leaving students with no recourse.

Currently, NIC does not support AI detection tools, and faculty are encouraged to design assessments that focus on process and originality to maintain academic integrity.

Instructors who suspect that a student has used AI tools contrary to expectations should follow the standard academic misconduct process, just as they would for any other misconduct allegation. Instructors should not rely on AI detectors to form the basis of an allegation of academic misconduct. If you have any questions, comments, or concerns, please email the Chair of the Academic Integrity Committee.

The only way to know if AI is permitted for your course assignments is to check with your instructor and consult your course outline. If you are unsure, do not assume AI use is allowed. As AI technology evolves, it is increasingly integrated into various tools, such as Copilot in Microsoft Word. While using AI for non-academic tasks like drafting emails or creating resumes may be appropriate, its use in academic work (such as assignments, essays, or exams) is not allowed unless explicitly stated by your instructor in the syllabus or exam instructions.

Even if your instructor permits the use of AI, you must properly cite it to maintain academic integrity. Transparency about the tools you use is essential: keep a record of the prompts you entered to generate AI output for your coursework, and properly cite any content generated entirely or partially by AI. Using AI ethically and responsibly is part of maintaining academic integrity. See AI at NIC on the NIC Library website for guidance on citing AI in APA and MLA.

In every course, your instructor sets learning outcomes and your grade reflects how well you achieve them. NIC’s six laneways for assessment outline different levels of AI use, ranging from no AI assistance to full integration. If AI is not permitted for an assignment, using it is considered cheating and may result in disciplinary action under NIC’s Code of Conduct Policy #3-06. This is because it interferes with your instructor’s ability to assess your knowledge and is unfair to students who follow the rules. Always check your course syllabus or ask your instructor to determine what level of AI use, if any, is allowed.

AI can be a valuable tool for efficiently gathering information from large datasets, but its output may contain errors, outdated information, false references, or biases. Users must critically evaluate AI-generated content to ensure accuracy. Students permitted to use AI in their coursework should verify information by using multiple prompts to gain different perspectives and be mindful of potential biases. As with any tool, AI is most effective when used responsibly.

Confidential information is data not intended for public use. Personal information (PI) includes any recorded data that identifies an individual, such as names, contact details, student numbers, academic history, and financial information. Student assignments may also contain PI and constitute intellectual property.

AI tools may collect and store various types of data, including log data (IP address, browser settings), usage data (location, content generated), and session interactions. Any personal information entered may be stored, used for AI training, or shared with third parties, often outside Canada, raising privacy concerns.

Copyrighted information includes works you do not own or have permission to use, such as journal articles, textbooks, and teaching materials. Uploading such content into AI tools may violate copyright laws.