The ISSN Register was created by UNESCO and France in the 1970s to index and identify analogue and digital serial publications (journals, newspapers, magazines and, later, websites and databases) worldwide, fostering scientific cooperation during the Cold War between the Western and Soviet blocs. The technology of the time did not allow wide access to this database, which was available only to member states and subscribers. Despite the emergence of the internet, the world wide web and Web 2.0, this situation persisted until 2013, when the ROAD database of open access scientific resources was made available on the web. The movement to free up ISSN data has accelerated under the impetus of new management and with the support of member countries, leading to the opening of the ISSN Portal in 2018. Today, the ISSN Portal offers a suite of services to libraries, publishers and the information industry that aims to trace as accurately as possible the trajectory of serial publications from their birth to their long-term preservation. This presentation provides an overview of the progress made since the opening of ROAD in 2013 and an outline of the 2024 strategy.
A talk given at 'Taking the Long View: International Perspectives on E-Journal Archiving', a conference hosted by EDINA and ISSN IC at the University of Edinburgh, September 7th 2015.
The Collections UofT Repository and Enterprise Content Management (KellliBee)
The Collections UofT Repository and Enterprise Content Management - use cases from archivists' perspectives for the Islandora digital collections platform.
The document discusses the purposes of web archives according to various national web archives. It identifies four main purposes: 1) Preservation of digital cultural heritage, with examples of archives focused on preserving websites as part of a country's heritage. 2) Responding to the risk of "digital dark ages" by selecting websites to archive. 3) Allowing viewing of past versions of websites to see their evolution over time. 4) Supporting future research by providing a source of information about society and the development of the web.
The document summarizes a presentation given on the Collections UofT Repository and Enterprise Content Management. It introduces Collections UofT as a platform that takes an enterprise content management approach to managing digital projects and assets across the University of Toronto in a collaborative way. Several use cases are described, including the UTARMS archives, digitized Nouwen family photograph albums, and sharing metadata between repositories using OAI-PMH. Challenges with the system are discussed along with potential solutions.
The Needs of Archives: 16 (simple) rules for a better archival management (Tom Cobbaert)
This document provides guidance on archival management through 16 rules organized under 6 themes: context, appraisal, arrangement, preservation, digitization, and access. It emphasizes the importance of understanding the history and use of archives, following national retention policies, avoiding unnecessary conservation through good storage practices, using digitization to increase access while preserving originals, and developing transparent access policies. While archival management is complex, these rules provide guidelines to help archives fulfill their roles in supporting rights, governance, history, and policy.
Human Scale Web Collecting for Individuals and Institutions (Webrecorder Work... (Anna Perricci)
This is the main slide deck for a workshop at iPRES 2018 on human scale web collecting. A primary focus of the presentation was the use of Webrecorder.io, a free, open source web archiving tool available to all.
Web archiving is the process of collecting portions of the World Wide Web for preservation and future access. It involves using web crawlers like Heritrix to automatically collect and archive web pages and sites. Large organizations like the Internet Archive and national libraries participate in web archiving to preserve culturally and historically important web content. However, web archiving faces challenges due to the scale of the web, rapid changes, and intellectual property issues.
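The crawl loop at the heart of tools like Heritrix can be sketched at toy scale: a frontier queue plus a visited set, with the fetch-and-archive step stubbed out. The link graph below is hypothetical, and a real crawler would fetch each URL and write a WARC record instead of just recording it.

```python
from collections import deque

# Toy link graph standing in for the live web (hypothetical data).
LINKS = {
    "http://example.org/": ["http://example.org/a", "http://example.org/b"],
    "http://example.org/a": ["http://example.org/b"],
    "http://example.org/b": [],
}

def crawl(seed, max_pages=10):
    """Breadth-first crawl: pop a URL from the frontier, skip it if already
    seen, 'archive' it, and enqueue its outgoing links."""
    frontier, visited, archived = deque([seed]), set(), []
    while frontier and len(archived) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        archived.append(url)  # a real crawler would fetch and store the page here
        frontier.extend(LINKS.get(url, []))
    return archived

print(crawl("http://example.org/"))
# ['http://example.org/', 'http://example.org/a', 'http://example.org/b']
```

The visited set is what keeps a crawl of the real, cyclic web from looping forever; scale, politeness delays, and scoping rules are what separate this sketch from a production crawler.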
The document summarizes the Web@rchive Austria project. It discusses how the Austrian National Library archives websites on the internet, including major domains in Austria and important websites that change regularly. It notes some of the challenges in web archiving like the short lifespan of webpages and how content selection must be careful. Examples of archived websites are provided from different time periods to show the project history.
Archiving for Now and Later - workshop at Common Field Convening 2019 (Anna Perricci)
- Webrecorder is open source software that allows users to archive web pages in an interactive format, preserving elements that crawlers often miss like dynamic content. It provides a free tool for "archiving at a human scale" by capturing pages intentionally.
- True archiving requires more than just saving files - it involves appraisal, description, preservation, and access. Webrecorder helps with this process by allowing users to manage and share their archived collections online.
- While useful for individual archiving needs, Webrecorder is also working to improve tools for long-term stewardship of collections and address challenges around ethics, sustainability, and preserving rapidly changing websites.
This document provides an overview of archival technologies presented at the 46th Annual Georgia Archives Institute on June 10-21, 2013. The presentation introduces various archival management tools like Archon and Archivists' Toolkit for managing archival collections. It also discusses digital collection management software such as CONTENTdm and Islandora. Emerging standards, formats and linked open data initiatives are also covered. The goal is to help archivists identify existing and new technologies that can help manage and provide access to archival materials.
This document summarizes a presentation on Memento and web archiving. It discusses Memento, which aims to make navigating archived web pages easy. It also introduces SiteStory, an alternative method of web archiving that archives pages as users access them rather than through crawling. Finally, it discusses applying Memento to linked data by creating archives of DBpedia versions.
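Memento (RFC 7089) works by content negotiation in the datetime dimension: a client asks a TimeGate for the version of a URI closest to a desired time via an `Accept-Datetime` header. A minimal sketch of building that header with the standard library (the example date is illustrative):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def memento_headers(when):
    """Headers for a Memento TimeGate request (RFC 7089): the datetime
    must be an RFC 1123 string in GMT."""
    return {"Accept-Datetime": format_datetime(when, usegmt=True)}

# Ask for the capture closest to 9 July 2013, noon UTC.
hdrs = memento_headers(datetime(2013, 7, 9, 12, 0, 0, tzinfo=timezone.utc))
print(hdrs["Accept-Datetime"])  # Tue, 09 Jul 2013 12:00:00 GMT
```

The TimeGate answers with a redirect to the best-matching memento and a `Memento-Datetime` header giving the capture time, so ordinary HTTP clients can navigate archived versions.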
Leslie Johnston: Challenges of Preserving Every Digital Format, 2012 (lljohnston)
The document discusses some of the challenges the Library of Congress faces in collecting and preserving digital content. It receives content in a wide variety of formats from different programs and partners. These include digitized newspapers, web archives, audiovisual content, tweets, and electronic publications. The Library uses various strategies to help manage this complex task, such as file format standards, multiple copies in different locations, and partnerships with other institutions. However, the diversity of formats and sources means preserving every digital format is extremely challenging.
Web archiving challenges and opportunities (Ahmed AlSum)
The document discusses challenges and opportunities in web archiving. It outlines the key stages in the web archiving lifecycle including selection of content, harvesting techniques, storage formats and infrastructure, ways to provide access, and the role of community. Specific challenges are discussed such as representing dynamic and social media content, optimizing storage solutions, and addressing limitations of current access interfaces. Opportunities exist in focusing collection efforts on underrepresented regions, leveraging existing archived data, and developing innovative services and tools to support researchers.
This document discusses two digital library software systems: Greenstone and DSpace.
[1] Greenstone and DSpace allow librarians to build their own digital collections and customize them for their needs. Both systems aim to make it easy for others to build comprehensive digital libraries.
[2] The document describes the key features and functions of each software, including advantages like being open source and customizable, as well as disadvantages like technical knowledge requirements.
[3] Options for integrating the two systems are explored, including using the OAI-PMH protocol, the METS standard, or developing a direct bridge between the software like the StoneD module.
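The OAI-PMH option mentioned above amounts to plain HTTP GET requests with a `verb` parameter; a harvester on either side (Greenstone or DSpace) issues URLs like the one built in this sketch. The base URL and set name are placeholders:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **params):
    """Build an OAI-PMH request URL; e.g. verb=ListRecords with a
    metadataPrefix harvests metadata records from a repository."""
    query = {"verb": verb, **params}
    return base_url + "?" + urlencode(query)

url = oai_request("https://repo.example.edu/oai", "ListRecords",
                  metadataPrefix="oai_dc", set="digitized-photos")
print(url)
# https://repo.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc&set=digitized-photos
```

The repository replies with XML records (Dublin Core under the `oai_dc` prefix) plus a resumption token for paging, which is what makes periodic cross-repository metadata sync practical.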
This document discusses various strategies and resources for archiving internet content for research purposes. It describes several existing large-scale web archives like the Internet Archive and Common Crawl, as well as national and institutional archives. It also outlines how researchers can collect targeted web archives using open-source tools or subscription-based services.
Web 3.0 is the current stage of the internet focused on machine-facilitated understanding to provide a more intuitive experience through personalized and intelligent search. Web 4.0 will be an open, linked, and intelligent web that functions similarly to the human brain. The document also defines electronic resources as information stored digitally, including e-books, e-journals, databases, and digital libraries. It discusses the need for and types of electronic resources as well as some issues with licensing, intellectual property rights, and infrastructure.
A one-hour lecture with hands-on instruction on how to install the GREENSTONE DIGITAL LIBRARY. The seminar was sponsored by the Baguio-Benguet Librarians Association, Inc. and conducted at the University of the Cordilleras Library on July 19 & 20, 2010.
Ipres2013 panel: Web Archiving – Lessons and Potential. This presentation highlights the main lessons learned while developing the Portuguese Web Archive and its potential use as an infrastructure for research.
This document provides guidance on building a digital archive using free or low-cost tools. It recommends using Internet Archive for unlimited file hosting and LibGuides for a centralized access point. Specific steps are outlined for setting up a collection on Internet Archive, adding files, and publishing the archive through LibGuides. Budget, reliability, and ease of use of Internet Archive are also discussed.
This document discusses how libraries can use Web 2.0 and 3.0 technologies to share and control their data. It covers topics like social tagging, sharing bibliographic data through open licenses, and allowing reuse and remixing of content. The document also discusses emerging technologies like semantic web and microformats that could allow machines to better interpret library content. It encourages libraries to engage with users on social networks and consider how to provide mobile services.
Kris Carpenter Negulescu, Gordon Paynter: Archiving the National Web of New Zea... (Future Perfect 2012)
This document summarizes lessons learned from archiving the New Zealand web domain. It discusses the legal requirement to archive internet documents and the two approaches used: selective and domain harvesting. Challenges include defining a national domain, harvest scope and shape, policies, infrastructure needs, quality assessment, sustainability, and responsiveness. It closes with reflections on New Zealand facing challenges similar to its peers and on the benefits of collaboration between institutions.
Slides for Web Archiving in the Heritage and Archive Sectors (Anna Perricci)
Webrecorder is a free and open source web archiving tool that allows users to create high-fidelity interactive captures of web pages. It aims to make web archiving accessible to all. The tool captures pages as users see them, including interactive elements. Captures can be browsed like live web pages. Webrecorder seeks to improve functionality through automation, search capabilities, and curation tools. It also works to establish sustainable funding models and provide training to support web archiving practices.
Presented at the Jornada Internacional sobre Archivos Web y Depósito Legal Electrónico (International Conference on Web Archives and Electronic Legal Deposit), at the Biblioteca Nacional de España (BNE), on 9 July 2013.
"Creating and Maintaining Web Archives"
Presented by Joanne Archer (University of Maryland), Tessa Fallon (Columbia University), Abbie Grotke (Library of Congress), and Kate Odell (Internet Archive)
Website Archivability - Library of Congress NDIIPP Presentation 2015/06/03 (Vangelis Banos)
Website Archivability (WA) captures the core aspects of a website that are crucial in diagnosing whether it can be archived with completeness and accuracy.
BlogForever Crawler: Techniques and algorithms to harvest modern weblogs Pres... (Vangelis Banos)
Blogs are a dynamic communication medium which has been widely established on the web. The BlogForever project has developed an innovative system to harvest, preserve, manage and reuse blog content. This paper presents a key component of the BlogForever platform, the web crawler. More precisely, our work concentrates on techniques to automatically extract content such as articles, authors, dates and comments from blog posts. To achieve this goal, we introduce a simple and robust algorithm to generate extraction rules based on string matching, using the blog's web feed in conjunction with blog hypertext. This approach leads to a scalable blog data extraction process. Furthermore, we show how we integrate a web browser into the web harvesting process in order to support data extraction from blogs with JavaScript-generated content.
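The core idea of generating extraction rules by matching feed values against a post's HTML can be illustrated with a deliberately simplified sketch. The HTML snippets, the fixed 20-character anchor, and the cut-at-next-tag heuristic are assumptions for illustration, not the paper's actual algorithm:

```python
def build_rule(html, feed_value):
    """Derive a string-matching rule: the markup immediately preceding the
    value (as supplied by the blog's web feed) becomes the extraction anchor."""
    i = html.find(feed_value)
    return None if i == -1 else html[max(0, i - 20):i]

def apply_rule(html, prefix):
    """Reuse the anchor on another post rendered by the same blog template."""
    start = html.find(prefix)
    if start == -1:
        return None
    start += len(prefix)
    return html[start:html.find("<", start)]  # value runs up to the next tag

# Hypothetical posts sharing one template; the feed supplies the first title.
post1 = '<div class="post"><h1 class="title">Hello world</h1><p>Body text</p></div>'
post2 = '<div class="post"><h1 class="title">Second post</h1><p>More text</p></div>'

rule = build_rule(post1, "Hello world")   # learned once from feed + hypertext
print(apply_rule(post2, rule))            # Second post
```

Because the rule is learned from one post but keyed to the template rather than the content, it generalizes to every post on the same blog, which is what makes the approach scalable.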
The theory and practice of Website Archivability (Vangelis Banos)
The document discusses website archivability and presents CLEAR, a method for evaluating the archivability of websites. CLEAR assesses website attributes like accessibility, cohesion, metadata, performance, and standards compliance to determine an overall archivability score. It was developed to help automate quality assurance for web archives by providing credible, live measurements of how completely and accurately a website can be archived. The authors also describe a demonstration of CLEAR called ArchiveReady.com and discuss the potential impact of evaluating website archivability for web professionals and archive operators.
CLEAR: a Credible Live Evaluation Method of Website Archivability, iPRES2013 (Vangelis Banos)
This document presents CLEAR, a method for evaluating the archivability of websites. CLEAR assesses website attributes like accessibility, cohesion, metadata, performance, and standards compliance by performing evaluations of facets within each attribute. It generates an archivability score on a scale of 0-100% for each facet and attribute, and an overall score for the website. The document demonstrates CLEAR's implementation in a web application called ArchiveReady.com and discusses its potential benefits for web archivists and professionals to improve web archiving practices and preserve websites effectively. It also outlines some limitations and directions for future work, such as differentially weighting facet evaluations.
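The scoring scheme described, facet evaluations rolled up into attribute scores and an overall 0-100% archivability figure, reduces to nested averaging. The facet values and the equal weights below are illustrative assumptions, not numbers from the paper (which notes differential weighting as future work):

```python
# Hypothetical facet scores (0.0-1.0) per CLEAR attribute; names follow the
# paper, the values and equal weighting are assumptions for illustration.
facets = {
    "Accessibility": [1.0, 0.5, 1.0],
    "Cohesion": [1.0],
    "Metadata": [0.0, 1.0],
    "Standards Compliance": [1.0, 1.0, 0.5],
    "Performance": [1.0],
}

def attribute_score(scores):
    """An attribute's score is the mean of its facet evaluations."""
    return sum(scores) / len(scores)

def archivability(facets):
    """Overall score: mean of per-attribute means, reported on a 0-100% scale."""
    attrs = [attribute_score(v) for v in facets.values()]
    return round(100 * sum(attrs) / len(attrs), 1)

print(archivability(facets))  # 83.3
```

Averaging per attribute first (rather than pooling all facets) keeps an attribute with many facets from dominating the overall score.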
Data Virtualization: Bringing the Power of FME to Any Application (Safe Software)
Imagine building web applications or dashboards on top of all your systems. With FME's new Data Virtualization feature, you can deliver full CRUD (create, read, update, and delete) capabilities on top of all your data, exploiting the full power of FME's "all data, any AI" capabilities. Data Virtualization enables you to build OpenAPI-compliant API endpoints using FME Form's no-code development platform.
In this webinar, you’ll see how easy it is to turn complex data into real-time, usable REST API based services. We’ll walk through a real example of building a map-based app using FME’s Data Virtualization, and show you how to get started in your own environment – no dev team required.
What you’ll take away:
-How to build live applications and dashboards with federated data
-Ways to control what’s exposed: filter, transform, and secure responses
-How to scale access with caching, asynchronous web call support, and API-endpoint-level security
-Where this fits in your stack: from web apps, to AI, to automation
Whether you’re building internal tools, public portals, or powering automation – this webinar is your starting point to real-time data delivery.
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati... (Aaryan Kansari)
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generative AI
Discover Agentic AI, the revolutionary step beyond reactive generative AI. Learn how these autonomous systems can reason, plan, execute, and adapt to achieve human-defined goals, acting as digital co-workers. Explore its promise, key frameworks like LangChain and AutoGen, and the challenges in designing reliable and safe AI agents for future workflows.
Sticky Note Bullets:
Definition: Next stage beyond ChatGPT-like systems, offering true autonomy.
Core Function: Can "reason, plan, execute and adapt" independently.
Distinction: Proactive (sets own actions for goals) vs. Reactive (responds to prompts).
Promise: Acts as "digital co-workers," handling grunt work like research, drafting, bug fixing.
Industry Outlook: Seen as a game-changer; Deloitte predicts 50% of companies using GenAI will have agentic AI pilots by 2027.
Key Frameworks: LangChain, Microsoft's AutoGen, LangGraph, CrewAI.
Development Focus: Learning to think in workflows and goals, not just model outputs.
Challenges: Ensuring reliability, safety; agents can still hallucinate or go astray.
Best Practices: Start small, iterate, add memory, keep humans in the loop for final decisions.
Use Cases: Limited only by imagination (e.g., drafting business plans, complex simulations).
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025 (Lorenzo Miniero)
Slides for my "Multistream support in the Janus SIP and NoSIP plugins" presentation at the OpenSIPS Summit 2025 event.
They describe my efforts refactoring the Janus SIP and NoSIP plugins to allow for the gatewaying of an arbitrary number of audio/video streams per call (thus breaking the current 1-audio/1-video limitation), plus some additional considerations on what this could mean when dealing with application protocols negotiated via SIP as well.
UiPath Community Zurich: Release Management and Build Pipelines (UiPathCommunity)
Ensuring robust, reliable, and repeatable delivery processes is more critical than ever - it's a success factor for your automations and for automation programmes as a whole. In this session, we’ll dive into modern best practices for release management and explore how tools like the UiPathCLI can streamline your CI/CD pipelines. Whether you’re just starting with automation or scaling enterprise-grade deployments, our event promises to deliver helpful insights to you. This topic is relevant for both on-premise and cloud users - as well as for automation developers and software testers alike.
📕 Agenda:
- Best Practices for Release Management
- What it is and why it matters
- UiPath Build Pipelines Deep Dive
- Exploring CI/CD workflows, the UiPathCLI and showcasing scenarios for both on-premise and cloud
- Discussion, Q&A
👨🏫 Speakers
Roman Tobler, CEO@ Routinuum
Johans Brink, CTO@ MvR Digital Workforce
We look forward to bringing best practices and showcasing build pipelines to you - and to having interesting discussions on this important topic!
If you have any questions or inputs prior to the event, don't hesitate to reach out to us.
This event streamed live on May 27 at 16:00 CET.
Check out all our upcoming UiPath Community sessions at:
👉 https://community.uipath.com/events/
Join UiPath Community Zurich chapter:
👉 https://community.uipath.com/zurich/
Create Your First AI Agent with UiPath Agent Builder (DianaGray10)
Join us for an exciting virtual event where you'll learn how to create your first AI Agent using UiPath Agent Builder. This session will cover everything you need to know about what an agent is and how easy it is to create one using the powerful AI-driven UiPath platform. You'll also discover the steps to successfully publish your AI agent. This is a wonderful opportunity for beginners and enthusiasts to gain hands-on insights and kickstart their journey in AI-powered automation.
Securiport is a border security systems provider with a progressive team approach to its task. The company acknowledges the importance of specialized skills in creating the latest in innovative security tech. The company has offices throughout the world to serve clients, and its employees speak more than twenty languages at the Washington D.C. headquarters alone.
Measuring Microsoft 365 Copilot and Gen AI Success (Nikki Chapple)
Session | Measuring Microsoft 365 Copilot and Gen AI Success with Viva Insights and Purview
Presenter | Nikki Chapple 2 x MVP and Principal Cloud Architect at CloudWay
Event | European Collaboration Conference 2025
Format | In person Germany
Date | 28 May 2025
📊 Measuring Copilot and Gen AI Success with Viva Insights and Purview
Presented by Nikki Chapple – Microsoft 365 MVP & Principal Cloud Architect, CloudWay
How do you measure the success—and manage the risks—of Microsoft 365 Copilot and Generative AI (Gen AI)? In this ECS 2025 session, Microsoft MVP and Principal Cloud Architect Nikki Chapple explores how to go beyond basic usage metrics to gain full-spectrum visibility into AI adoption, business impact, user sentiment, and data security.
🎯 Key Topics Covered:
Microsoft 365 Copilot usage and adoption metrics
Viva Insights Copilot Analytics and Dashboard
Microsoft Purview Data Security Posture Management (DSPM) for AI
Measuring AI readiness, impact, and sentiment
Identifying and mitigating risks from third-party Gen AI tools
Shadow IT, oversharing, and compliance risks
Microsoft 365 Admin Center reports and Copilot Readiness
Power BI-based Copilot Business Impact Report (Preview)
📊 Why AI Measurement Matters: Without meaningful measurement, organizations risk operating in the dark—unable to prove ROI, identify friction points, or detect compliance violations. Nikki presents a unified framework combining quantitative metrics, qualitative insights, and risk monitoring to help organizations:
Prove ROI on AI investments
Drive responsible adoption
Protect sensitive data
Ensure compliance and governance
🔍 Tools and Reports Highlighted:
Microsoft 365 Admin Center: Copilot Overview, Usage, Readiness, Agents, Chat, and Adoption Score
Viva Insights Copilot Dashboard: Readiness, Adoption, Impact, Sentiment
Copilot Business Impact Report: Power BI integration for business outcome mapping
Microsoft Purview DSPM for AI: Discover and govern Copilot and third-party Gen AI usage
🔐 Security and Compliance Insights: Learn how to detect unsanctioned Gen AI tools like ChatGPT, Gemini, and Claude, track oversharing, and apply eDLP and Insider Risk Management (IRM) policies. Understand how to use Microsoft Purview—even without E5 Compliance—to monitor Copilot usage and protect sensitive data.
📈 Who Should Watch: This session is ideal for IT leaders, security professionals, compliance officers, and Microsoft 365 admins looking to:
Maximize the value of Microsoft Copilot
Build a secure, measurable AI strategy
Align AI usage with business goals and compliance requirements
🔗 Read the blog https://nikkichapple.com/measuring-copilot-gen-ai/
European Accessibility Act & Integrated Accessibility Testing - Julia Undeutsch
Emma Dawson will guide you through two important topics in this session.
Firstly, she will prepare you for the European Accessibility Act (EAA), which comes into effect on 28 June 2025, and show you how development teams can prepare for it.
In the second part of the webinar, Emma Dawson will explore with you various integrated testing methods and tools that will help you improve accessibility during the development cycle, such as Linters, Storybook, Playwright, just to name a few.
Focus: European Accessibility Act, Integrated Testing tools and methods (e.g. Linters, Storybook, Playwright)
Target audience: Everyone, Developers, Testers
Droidal: AI Agents Revolutionizing Healthcare - Droidal LLC
Droidal’s AI Agents are transforming healthcare by bringing intelligence, speed, and efficiency to key areas such as Revenue Cycle Management (RCM), clinical operations, and patient engagement. Built specifically for the needs of U.S. hospitals and clinics, Droidal's solutions are designed to improve outcomes and reduce administrative burden.
Through simple visuals and clear examples, the presentation explains how AI Agents can support medical coding, streamline claims processing, manage denials, ensure compliance, and enhance communication between providers and patients. By integrating seamlessly with existing systems, these agents act as digital coworkers that deliver faster reimbursements, reduce errors, and enable teams to focus more on patient care.
Droidal's AI technology is more than just automation — it's a shift toward intelligent healthcare operations that are scalable, secure, and cost-effective. The presentation also offers insights into future developments in AI-driven healthcare, including how continuous learning and agent autonomy will redefine daily workflows.
Whether you're a healthcare administrator, a tech leader, or a provider looking for smarter solutions, this presentation offers a compelling overview of how Droidal’s AI Agents can help your organization achieve operational excellence and better patient outcomes.
A free demo trial is available for those interested in experiencing Droidal’s AI Agents firsthand. Our team will walk you through a live demo tailored to your specific workflows, helping you understand the immediate value and long-term impact of adopting AI in your healthcare environment.
To request a free trial or learn more:
https://droidal.com/
Dev Dives: System-to-system integration with UiPath API Workflows - UiPathCommunity
Join the next Dev Dives webinar on May 29 for a first look at UiPath API Workflows, a powerful tool purpose-built for API integration and data manipulation!
This session will guide you through the technical aspects of automating communication between applications, systems and data sources using API workflows.
📕 We'll delve into:
- How this feature delivers API integration as a first-party concept of the UiPath Platform.
- How to design, implement, and debug API workflows to integrate with your existing systems seamlessly and securely.
- How to optimize your API integrations with runtime built for speed and scalability.
This session is ideal for developers looking to solve API integration use cases with the power of the UiPath Platform.
👨🏫 Speakers:
Gunter De Souter, Sr. Director, Product Manager @UiPath
Ramsay Grove, Product Manager @UiPath
This session streamed live on May 29, 2025, 16:00 CET.
Check out all our upcoming UiPath Dev Dives sessions:
👉 https://community.uipath.com/dev-dives-automation-developer-2025/
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat... - James Anderson
The Quantum Apocalypse: A Looming Threat & The Need for Post-Quantum Encryption
We explore the imminent risks posed by quantum computing to modern encryption standards and the urgent need for post-quantum cryptography (PQC).
Bio: With 30 years in cybersecurity, including as a CISO, Tommy is a strategic leader driving security transformation, risk management, and program maturity. He has led high-performing teams, shaped industry policies, and advised organizations on complex cyber, compliance, and data protection challenges.
UiPath Community Berlin: Studio Tips & Tricks and UiPath Insights - UiPathCommunity
Join the UiPath Community Berlin (Virtual) meetup on May 27 to discover handy Studio Tips & Tricks and get introduced to UiPath Insights. Learn how to boost your development workflow, improve efficiency, and gain visibility into your automation performance.
📕 Agenda:
- Welcome & Introductions
- UiPath Studio Tips & Tricks for Efficient Development
- Best Practices for Workflow Design
- Introduction to UiPath Insights
- Creating Dashboards & Tracking KPIs (Demo)
- Q&A and Open Discussion
Perfect for developers, analysts, and automation enthusiasts!
This session streamed live on May 27, 18:00 CET.
Check out all our upcoming UiPath Community sessions at:
👉 https://community.uipath.com/events/
Join our UiPath Community Berlin chapter:
👉 https://community.uipath.com/berlin/
Introducing FME Realize: A New Era of Spatial Computing and AR - Safe Software
A new era for the FME Platform has arrived – and it’s taking data into the real world.
Meet FME Realize: marking a new chapter in how organizations connect digital information with the physical environment around them. With the addition of FME Realize, FME has evolved into an All-data, Any-AI Spatial Computing Platform.
FME Realize brings spatial computing, augmented reality (AR), and the full power of FME to mobile teams: making it easy to visualize, interact with, and update data right in the field. From infrastructure management to asset inspections, you can put any data into real-world context, instantly.
Join us to discover how spatial computing, powered by FME, enables digital twins, AI-driven insights, and real-time field interactions: all through an intuitive no-code experience.
In this one-hour webinar, you’ll:
-Explore what FME Realize includes and how it fits into the FME Platform
-Learn how to deliver real-time AR experiences, fast
-See how FME enables live, contextual interactions with enterprise data across systems
-See demos, including ones you can try yourself
-Get tutorials and downloadable resources to help you start right away
Whether you’re exploring spatial computing for the first time or looking to scale AR across your organization, this session will give you the tools and insights to get started with confidence.
Exploring the advantages of on-premises Dell PowerEdge servers with AMD EPYC processors vs. the cloud for small to medium businesses’ AI workloads
AI initiatives can bring tremendous value to your business, but you need to support your new AI workloads effectively. That means choosing the best possible infrastructure for your needs—and many companies are finding that the cloud isn’t right for them. According to a recent Rackspace survey of IT executives, 69 percent of companies have moved some of their applications on-premises from the cloud, with half of those citing security and compliance as the reason and 44 percent citing cost.
On-premises solutions provide a number of advantages. With full control over your security infrastructure, you can be certain that all compliance requirements remain firmly in the hands of your IT team. Opting for on-premises also gives you the ability to design your infrastructure to the precise needs of that team and your new AI workloads. Depending on the workload, you may also see performance benefits, along with more predictable costs. As you start to build your next AI initiative, consider an on-premises solution utilizing AMD EPYC processor-powered Dell PowerEdge servers.
Improving Developer Productivity With DORA, SPACE, and DevEx - Justin Reock
Ready to measure and improve developer productivity in your organization?
Join Justin Reock, Deputy CTO at DX, for an interactive session where you'll learn actionable strategies to measure and increase engineering performance.
Leave this session equipped with a comprehensive understanding of developer productivity and a roadmap to create a high-performing engineering team in your company.
Can you save the web? Web Archiving!
1. Can we save the web?
WEB ARCHIVING
Vangelis Banos
http://vbanos.gr/
Unconference, 9-10 December 2013
2. Can we save the web?
• What do you mean?
• What is web archiving?
• The practical use of web archives.
• Making your own web archive.
3. What is the World Wide Web?
A huge collection of digital documents (websites) which are stored on special computers (web servers), interconnected with each other.
8. Why save the web?
1. More and more material is born digital only!
2. Some websites contain unique data and valuable information.
– Users take action and make important decisions based on this information.
3. The web is a live record of contemporary:
1. Society,
2. Culture,
3. Science,
4. Economy.
4. Responsibility to preserve the web.
5. Transparency is promoted by saving the web.
9. Isn’t the web already safe?
• The answer is: NOT really!
• Websites are in danger:
– Organisations that maintain them stop caring about them,
– Organisations that maintain them cease to exist,
– Natural disasters destroy computer facilities (fires, floods, storms, etc.),
– Technical problems damage websites (bugs, computer viruses, backup failures, hardware failures),
– Their data are tampered with on purpose, for many reasons (political, financial, criminal, etc.).
10. A major blog hosting company was shut down by the U.S. authorities.
19. WEB ARCHIVING
The process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public.
20. Challenges
• How is it done technically?
• What should I choose to archive?
– The whole website? Some pages? Some files only?
• What do I want to do with the web archive I’m creating?
• Who will have access?
• Who is the owner of the web archive content?
21. Archiving web pages is a technical challenge
[Diagram: a generic file archiving operation: file(s) pass through software and hardware to produce a record.]
22. Archiving web pages is a technical challenge
[Diagram: a web archiving operation: a website consists of many files spread across multiple pieces of software and hardware, all of which must be captured.]
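As the two diagrams suggest, archiving a single web page really means capturing every file it depends on. A minimal sketch (my own illustration, not a tool from the talk) that lists those dependencies using only the Python standard library:

```python
# Sketch: why "one page" means many files. We parse an HTML page and collect
# the URLs of embedded resources (stylesheets, images, scripts) that an
# archiver would also need to fetch, plus the hyperlinks a crawler may follow.
from html.parser import HTMLParser

class ResourceLister(HTMLParser):
    """Collect embedded-resource URLs and outgoing hyperlinks."""
    def __init__(self):
        super().__init__()
        self.resources = []   # files needed to render the page
        self.links = []       # further pages a crawler may visit

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and "href" in attrs:
            self.resources.append(attrs["href"])
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

page = """<html><head><link href="style.css" rel="stylesheet"></head>
<body><img src="logo.png"><script src="app.js"></script>
<a href="/about.html">About</a></body></html>"""

parser = ResourceLister()
parser.feed(page)
print(parser.resources)  # ['style.css', 'logo.png', 'app.js']
print(parser.links)      # ['/about.html']
```

Real archiving tools do the same discovery recursively over every fetched page, which is what makes a website a moving target compared with a single file.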
23. How is it done?
• Possible web archiving targets:
– Government websites,
– Educational institutions,
– People’s suggestions,
– Currently popular websites,
– Popular media,
– Big companies,
– Special events.
25. Who is working on web archiving?
Many important organisations have been working on web archiving since 1996.
26. International Internet Preservation Consortium
• IIPC Members
– National Libraries,
– Academic Libraries,
– Cultural Organisations,
– Universities,
– Software development companies
• Web Archiving Timeline
– http://timeline.webarchivists.org/
27. Obligation of the National Library
• According to UNESCO:
– «a national library is responsible for the collection and storage of the national cultural heritage».
• In Greece, according to law No. 3149/03:
– «publishers or authors (when there is no publisher) of any printed material are obliged to submit three copies of their work to the National Library of Greece. This obligation also includes audiovisual and e-publishing material».
• What about the Greek web?
28. Bibliothèque nationale de France
2006: legal deposit extended to “signs, signals, writings, images, sounds or messages of any kind communicated to the public by electronic means”.
The goal is not to gather the «best of the web», but to preserve a collection representative of the web at a certain date.
29. Can we save the web?
• What do you mean?
• What is web archiving?
• The practical use of web archives.
• Making your own web archive.
36. Making your own web archive
• Using HTTrack software (Open Source)
– Installation
– Practical advice
– Features
– Usage scenarios
• Archive http://2013.futurelibrary.gr/
• Archive http://www.auth.gr/
37. Things worth considering
• Set limits
– Filters to define the file types you want to copy.
– Bandwidth limits & connection limits to avoid overloading the site you are archiving AND saturating your library network.
– Time limits.
• Check the size of the files you have downloaded.
• Plan for disk space according to your needs.
• Check the target website’s copyright terms. Are you allowed to:
– Archive for personal use?
– Archive for public use on library computers?
– Archive to publish on the web?
• If you are not sure, ask the website owner before you begin archiving.
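The limits recommended above (stay on the target site, filter file types, cap file size) can be expressed as a simple fetch policy. A hypothetical sketch in Python: the function and constant names are my own, not from the talk or from any archiving tool:

```python
# Illustrative fetch policy for a small library crawl: a crawler would call
# should_fetch() for every discovered URL before downloading it.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"2013.futurelibrary.gr"}          # stay inside the target site
ALLOWED_EXTENSIONS = {".html", ".css", ".js",      # file-type filters
                      ".png", ".jpg", ".pdf", ""}  # "" = extension-less paths
MAX_FILE_BYTES = 10 * 1024 * 1024                  # 10 MB size limit per file

def should_fetch(url: str, reported_size: int) -> bool:
    """Apply host, file-type, and size limits to a candidate URL."""
    parts = urlparse(url)
    if parts.netloc not in ALLOWED_HOSTS:
        return False                               # don't wander off-site
    filename = parts.path.rsplit("/", 1)[-1]
    ext = parts.path[parts.path.rfind("."):] if "." in filename else ""
    if ext.lower() not in ALLOWED_EXTENSIONS:
        return False                               # blocked file type
    return reported_size <= MAX_FILE_BYTES         # enforce the size cap

print(should_fetch("http://2013.futurelibrary.gr/index.html", 50_000))  # True
print(should_fetch("http://2013.futurelibrary.gr/video.mp4", 50_000))   # False
print(should_fetch("http://example.com/index.html", 50_000))            # False
```

Tools such as HTTrack expose equivalent controls through their filter and limit settings; the point of the sketch is that each limit is just a cheap check made before any bandwidth or disk space is spent.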
38. Scenario: create your own mini web archive in your library on a shoestring
• Equipment:
– A typical Windows computer with the biggest possible hard disk (the more TB, the better).
– An equally sized backup disk (e.g. an external USB hard disk).
– A DSL Internet connection.
– HTTrack open source software.
• Select important local websites.
• Get permission from website owners if necessary.
• Set up a regular web archiving schedule (e.g. once per month).
• Provide information and access to the web archive on your library’s local computers for the public.
39. Can we save the web?
YES WE CAN!
• Questions?
• Thank you for your attention
• Contact:
– Web: http://vbanos.gr
– Email: vbanos@gmail.com
– Twitter: @vbanos